Mohamed Houri’s Oracle Notes

August 19, 2020

A 2 min tuning case

Filed under: Oracle — hourim @ 1:20 pm

Here’s a practical performance troubleshooting case which took me a couple of minutes to fix but about one hour to write down 😊. It came via an end-user complaint about a load process (insert/select) triggered by Oracle Warehouse Builder (OWB) which had been running for the last 3 days. When the on-call DBA handed this request to me, he said that he had already run the SQL Tuning Advisor and was still unable to find the root cause of this performance degradation. Before I started the investigation I already had an idea of what was happening. Indeed, almost always, if a query lasts a couple of days without erroring out, there is a big chance that a NESTED LOOPS join has something to do with the delay. And, in that case, the CBO has probably messed up its cardinality estimation and opted for the wrong join method. That is why, based on the DBA’s explanation, I already had in mind that this ETL load process was suffering from a lack of table/index/column statistics.

Anyway, here are exactly the steps I followed to fix this issue:

-- Get the SQL_ID of the running process
SQL> select sql_id, sql_exec_id from gv$sql_monitor where status = 'EXECUTING';

Fortunately, there was only a single SQL_ID in the EXECUTING phase.

-- Get the corresponding SQL Monitoring report
SQL> SELECT DBMS_SQLTUNE.report_sql_monitor(
  		sql_id       => 'bsgj02zjrfdbj',
  		type         => 'TEXT',
  		report_level => 'ALL') AS report
FROM dual;

Global Information
------------------------------
 Status                                 :  EXECUTING           
 Instance ID                            :  1                   
 Session                                :  XYZDATA (245:14813) 
 SQL ID                                 :  bsgj02zjrfdbj       
 SQL Execution ID                       :  16777227            
 Execution Started                      :  10/31/2019 14:26:21 
 First Refresh Time                     :  10/31/2019 14:26:33 
 Last Refresh Time                      :  11/04/2019 10:56:29 
 Duration                               :  333009s             
 Module/Action                          :  CCC_CCS/EXECUTION   
 Service                                :  MYAPL_1             
 Program                                :  JDBC Thin Client    
 PLSQL Entry Ids (Object/Subprogram)    :  10328167,7          
 PLSQL Current Ids (Object/Subprogram)  :  10328167,7          

Global Stats
======================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Gets  | Reqs | Bytes |
======================================================================
|  553389 |  329894 |   223477 |          18 |     4G |   4G | 104TB |
======================================================================

SQL Plan Monitoring Details (Plan Hash Value=2281702856)

================================================================================================
| Id   |               Operation                |      Name       |  Rows   | Execs |   Rows   |
|      |                                        |                 | (Estim) |       | (Actual) |
================================================================================================
|    0 | INSERT STATEMENT                       |                 |         |     1 |          |
|    1 |   LOAD TABLE CONVENTIONAL              |                 |         |     1 |        0 |
|      |                                        |                 |         |       |          |
|    2 |    NESTED LOOPS OUTER                  |                 |      2M |     1 |    19932 |
|    3 |     PARTITION LIST OR                  |                 |      2M |     1 |    19932 |
| -> 4 |      TABLE ACCESS FULL                 | TT_XYZ_BAE      |      2M |     1 |    19932 |
|    5 |     PARTITION RANGE ITERATOR           |                 |       1 | 19932 |    19932 |
|    6 |      TABLE ACCESS BY LOCAL INDEX ROWID | T1_TAB          |       1 | 19932 |    19932 |
|      |                                        |                 |         |       |          |
|      |                                        |                 |         |       |          |
| -> 7 |       BITMAP CONVERSION TO ROWIDS      |                 |         | 19932 |       4G |
| -> 8 |        BITMAP INDEX SINGLE VALUE       | XY99_T1_TAB_BRI |         | 19932 |       1M |
|      |                                        |                 |         |       |          |
================================================================================================

As I expected, there is a NESTED LOOPS operation in the mix (operation Id n°2). In addition, the cardinality estimate of 1 at operation line 6 is almost always an indication of a wrong estimation by the CBO. Add to this the 4G rows generated when visiting the XY99_T1_TAB_BRI bitmap index and you end up suspecting a statistics issue with the T1_TAB table.

So, the next immediate step was to check the statistics of the T1_TAB table and the XY99_T1_TAB_BRI bitmap index:

SQL> @GetPartTab
Enter value for owner: XYZDATA
Enter value for table_name: T1_TAB

TABLE_NAME   PNAME                PPOS   NUM_ROWS TABLESPACE_NAME  LAST_ANAL
------------ ------------------ ------ ---------- ---------------- -------------------
T1_TAB       P_T1_TAB_200001         1          0 XY99_TBS_DONN    26/10/2019 15:26:55
T1_TAB       P_T1_TAB_201708         2    3934618 XY99_TBS_DONN    26/10/2019 15:21:50
T1_TAB       P_T1_TAB_201709         3    3948128 XY99_TBS_DONN    26/10/2019 14:35:11
T1_TAB       P_T1_TAB_201710         4    4078620 XY99_TBS_DONN    26/10/2019 13:47:20
T1_TAB       P_T1_TAB_201711         5    4093099 XY99_TBS_DONN    26/10/2019 12:52:13
T1_TAB       P_T1_TAB_201712         6    4059354 XY99_TBS_DONN    26/10/2019 11:53:52
T1_TAB       P_T1_TAB_201801         7    4080096 XY99_TBS_DONN    26/10/2019 10:58:33
T1_TAB       P_T1_TAB_201802         8    4096904 XY99_TBS_DONN    26/10/2019 09:58:16
T1_TAB       P_T1_TAB_201803         9    4115548 XY99_TBS_DONN    26/10/2019 08:59:04
T1_TAB       P_T1_TAB_201804        10    4175894 XY99_TBS_DONN    26/10/2019 07:54:15
T1_TAB       P_T1_TAB_201805        11    4187923 XY99_TBS_DONN    26/10/2019 07:05:42
T1_TAB       P_T1_TAB_201806        12    4224300 XY99_TBS_DONN    26/10/2019 06:21:28
T1_TAB       P_T1_TAB_201807        13    4233116 XY99_TBS_DONN    26/10/2019 05:30:46
T1_TAB       P_T1_TAB_201808        14    4242821 XY99_TBS_DONN    26/10/2019 04:42:50
T1_TAB       P_T1_TAB_201809        15    4291935 XY99_TBS_DONN    26/10/2019 03:49:06
T1_TAB       P_T1_TAB_201810        16    2596486 XY99_TBS_DONN    26/10/2019 02:57:36
T1_TAB       P_T1_TAB_201811        17    4305042 XY99_TBS_DONN    26/10/2019 02:08:20
T1_TAB       P_T1_TAB_201812        18    4314987 XY99_TBS_DONN    26/10/2019 01:12:00
T1_TAB       P_T1_TAB_201901        19    4333586 XY99_TBS_DONN    26/10/2019 00:21:52
T1_TAB       P_T1_TAB_201902        20    4428940 XY99_TBS_DONN    25/10/2019 23:22:28
T1_TAB       P_T1_TAB_201903        21    4493339 XY99_TBS_DONN    25/10/2019 22:17:44
T1_TAB       P_T1_TAB_201904        22    4420400 XY99_TBS_DONN    25/10/2019 20:54:34
T1_TAB       P_T1_TAB_201905        23    4408412 XY99_TBS_DONN    25/10/2019 19:43:10
T1_TAB       P_T1_TAB_201906        24    4277786 XY99_TBS_DONN    25/10/2019 18:49:03
T1_TAB       P_T1_TAB_201907        25    4250320 XY99_TBS_DONN    25/10/2019 17:39:26
T1_TAB       P_T1_TAB_201908        26          0 XY99_TBS_DONN    25/10/2019 16:53:29  --> looks suspicious
                                       ----------
Total num_rows                         99591654

SQL> @GetPartInd
Enter value for owner: XYZDATA
Enter value for index_name: XY99_T1_TAB_BRI

INDEX_NAME        PNAME              PPOS   NUM_ROWS TABLESPACE_NAME LAST_ANAL
----------------- ---------------- ------ ---------- --------------- -------------------
XY99_T1_TAB_BRI   P_T1_TAB_200001       1          0 XY99_TBS_INDX   26/10/2019 15:26:56
XY99_T1_TAB_BRI   P_T1_TAB_201708       2         67 XY99_TBS_INDX   26/10/2019 15:22:13
XY99_T1_TAB_BRI   P_T1_TAB_201709       3         64 XY99_TBS_INDX   26/10/2019 14:35:34
XY99_T1_TAB_BRI   P_T1_TAB_201710       4         66 XY99_TBS_INDX   26/10/2019 13:47:40
XY99_T1_TAB_BRI   P_T1_TAB_201711       5         67 XY99_TBS_INDX   26/10/2019 12:52:32
XY99_T1_TAB_BRI   P_T1_TAB_201712       6         66 XY99_TBS_INDX   26/10/2019 11:54:15
XY99_T1_TAB_BRI   P_T1_TAB_201801       7         66 XY99_TBS_INDX   26/10/2019 10:58:57
XY99_T1_TAB_BRI   P_T1_TAB_201802       8         67 XY99_TBS_INDX   26/10/2019 09:58:44
XY99_T1_TAB_BRI   P_T1_TAB_201803       9         67 XY99_TBS_INDX   26/10/2019 08:59:31
XY99_T1_TAB_BRI   P_T1_TAB_201804      10         68 XY99_TBS_INDX   26/10/2019 07:54:36
XY99_T1_TAB_BRI   P_T1_TAB_201805      11         68 XY99_TBS_INDX   26/10/2019 07:06:05
XY99_T1_TAB_BRI   P_T1_TAB_201806      12         69 XY99_TBS_INDX   26/10/2019 06:21:52
XY99_T1_TAB_BRI   P_T1_TAB_201807      13         69 XY99_TBS_INDX   26/10/2019 05:31:09
XY99_T1_TAB_BRI   P_T1_TAB_201808      14         69 XY99_TBS_INDX   26/10/2019 04:43:13
XY99_T1_TAB_BRI   P_T1_TAB_201809      15         72 XY99_TBS_INDX   26/10/2019 03:49:29
XY99_T1_TAB_BRI   P_T1_TAB_201810      16        200 XY99_TBS_INDX   26/10/2019 02:57:51
XY99_T1_TAB_BRI   P_T1_TAB_201811      17         70 XY99_TBS_INDX   26/10/2019 02:08:46
XY99_T1_TAB_BRI   P_T1_TAB_201812      18         70 XY99_TBS_INDX   26/10/2019 01:12:25
XY99_T1_TAB_BRI   P_T1_TAB_201901      19         70 XY99_TBS_INDX   26/10/2019 00:22:16
XY99_T1_TAB_BRI   P_T1_TAB_201902      20         72 XY99_TBS_INDX   25/10/2019 23:22:55
XY99_T1_TAB_BRI   P_T1_TAB_201903      21         73 XY99_TBS_INDX   25/10/2019 22:18:13
XY99_T1_TAB_BRI   P_T1_TAB_201904      22         72 XY99_TBS_INDX   25/10/2019 20:55:06
XY99_T1_TAB_BRI   P_T1_TAB_201905      23         72 XY99_TBS_INDX   25/10/2019 19:43:34
XY99_T1_TAB_BRI   P_T1_TAB_201906      24         70 XY99_TBS_INDX   25/10/2019 18:49:27
XY99_T1_TAB_BRI   P_T1_TAB_201907      25         70 XY99_TBS_INDX   25/10/2019 17:39:51
XY99_T1_TAB_BRI   P_T1_TAB_201908      26          0 XY99_TBS_INDX   25/10/2019 16:53:31 --> looks suspicious
                                          ----------
Total num_rows                                  1784
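
Note that GetPartTab and GetPartInd are just thin wrappers around the partition statistics dictionary views and are not reproduced in this post. A minimal sketch of the table version (the column list and formatting are illustrative, not the exact script) could look like the following; the index version would simply query dba_ind_partitions instead:

-- Minimal sketch of a GetPartTab-like script (illustrative, not the exact script used above)
col table_name format a12
col pname      format a20
select
     table_name
    ,partition_name      pname
    ,partition_position  ppos
    ,num_rows
    ,tablespace_name
    ,to_char(last_analyzed,'dd/mm/yyyy hh24:mi:ss') last_anal
from
    dba_tab_partitions
where
    table_owner = upper('&owner')
and table_name  = upper('&table_name')
order by
    partition_position;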

The 26th partition looks suspicious. For the sake of simplicity and a quick fix, I asked the end user whether their process deals with the last created partition, and the answer was YES.

Then, I killed the session, gathered the statistics, and asked the end user to relaunch the process.
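
The statistics gathering call itself is not reproduced above; a hedged sketch of what it could look like for the suspicious partition (the parameter values are illustrative, not the exact call used) is:

-- Sketch only: gather statistics on the suspicious partition and its local index
begin
  dbms_stats.gather_table_stats(
      ownname       => 'XYZDATA'
     ,tabname       => 'T1_TAB'
     ,partname      => 'P_T1_TAB_201908'
     ,granularity   => 'PARTITION'
     ,cascade       => true        -- also gather the local index partition statistics
     ,no_invalidate => false       -- invalidate dependent cursors immediately
  );
end;
/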

SQL> @GetPartTab
Enter value for owner: XYZDATA
Enter value for table_name: T1_TAB

TABLE_NAME    PNAME               PPOS   NUM_ROWS TABLESPACE_NAME  LAST_ANAL
------------- ----------------- ------ ---------- ---------------- -------------------
T1_TAB        P_T1_TAB_200001        1          0 XY99_TBS_DONN    26/10/2019 15:26:55
T1_TAB        P_T1_TAB_201708        2    3934618 XY99_TBS_DONN    26/10/2019 15:21:50
T1_TAB        P_T1_TAB_201709        3    3948128 XY99_TBS_DONN    26/10/2019 14:35:11
T1_TAB        P_T1_TAB_201710        4    4078620 XY99_TBS_DONN    26/10/2019 13:47:20
T1_TAB        P_T1_TAB_201711        5    4093099 XY99_TBS_DONN    26/10/2019 12:52:13
T1_TAB        P_T1_TAB_201712        6    4059354 XY99_TBS_DONN    26/10/2019 11:53:52
T1_TAB        P_T1_TAB_201801        7    4080096 XY99_TBS_DONN    26/10/2019 10:58:33
T1_TAB        P_T1_TAB_201802        8    4096904 XY99_TBS_DONN    26/10/2019 09:58:16
T1_TAB        P_T1_TAB_201803        9    4115548 XY99_TBS_DONN    26/10/2019 08:59:04
T1_TAB        P_T1_TAB_201804       10    4175894 XY99_TBS_DONN    26/10/2019 07:54:15
T1_TAB        P_T1_TAB_201805       11    4187923 XY99_TBS_DONN    26/10/2019 07:05:42
T1_TAB        P_T1_TAB_201806       12    4224300 XY99_TBS_DONN    26/10/2019 06:21:28
T1_TAB        P_T1_TAB_201807       13    4233116 XY99_TBS_DONN    26/10/2019 05:30:46
T1_TAB        P_T1_TAB_201808       14    4242821 XY99_TBS_DONN    26/10/2019 04:42:50
T1_TAB        P_T1_TAB_201809       15    4291935 XY99_TBS_DONN    26/10/2019 03:49:06
T1_TAB        P_T1_TAB_201810       16    2596486 XY99_TBS_DONN    26/10/2019 02:57:36
T1_TAB        P_T1_TAB_201811       17    4305042 XY99_TBS_DONN    26/10/2019 02:08:20
T1_TAB        P_T1_TAB_201812       18    4314987 XY99_TBS_DONN    26/10/2019 01:12:00
T1_TAB        P_T1_TAB_201901       19    4333586 XY99_TBS_DONN    26/10/2019 00:21:52
T1_TAB        P_T1_TAB_201902       20    4428940 XY99_TBS_DONN    25/10/2019 23:22:28
T1_TAB        P_T1_TAB_201903       21    4493339 XY99_TBS_DONN    25/10/2019 22:17:44
T1_TAB        P_T1_TAB_201904       22    4420400 XY99_TBS_DONN    25/10/2019 20:54:34
T1_TAB        P_T1_TAB_201905       23    4408412 XY99_TBS_DONN    25/10/2019 19:43:10
T1_TAB        P_T1_TAB_201906       24    4277786 XY99_TBS_DONN    25/10/2019 18:49:03
T1_TAB        P_T1_TAB_201907       25    4250320 XY99_TBS_DONN    25/10/2019 17:39:26
T1_TAB        P_T1_TAB_201908       26    4185925 XY99_TBS_DONN    04/11/2019 18:51:48
                                       ----------                  
Total num_rows                         103777579                   


SQL> @getPartInd
Enter value for owner: XYZDATA
Enter value for index_name: XY99_T1_TAB_BRI

INDEX_NAME        PNAME               PPOS   NUM_ROWS TABLESPACE_NAME   LAST_ANAL
----------------- ----------------- ------ ---------- ----------------- -------------------
XY99_T1_TAB_BRI   P_T1_TAB_200001        1          0 XY99_TBS_INDX     26/10/2019 15:26:56
XY99_T1_TAB_BRI   P_T1_TAB_201708        2         67 XY99_TBS_INDX     26/10/2019 15:22:13
XY99_T1_TAB_BRI   P_T1_TAB_201709        3         64 XY99_TBS_INDX     26/10/2019 14:35:34
XY99_T1_TAB_BRI   P_T1_TAB_201710        4         66 XY99_TBS_INDX     26/10/2019 13:47:40
XY99_T1_TAB_BRI   P_T1_TAB_201711        5         67 XY99_TBS_INDX     26/10/2019 12:52:32
XY99_T1_TAB_BRI   P_T1_TAB_201712        6         66 XY99_TBS_INDX     26/10/2019 11:54:15
XY99_T1_TAB_BRI   P_T1_TAB_201801        7         66 XY99_TBS_INDX     26/10/2019 10:58:57
XY99_T1_TAB_BRI   P_T1_TAB_201802        8         67 XY99_TBS_INDX     26/10/2019 09:58:44
XY99_T1_TAB_BRI   P_T1_TAB_201803        9         67 XY99_TBS_INDX     26/10/2019 08:59:31
XY99_T1_TAB_BRI   P_T1_TAB_201804       10         68 XY99_TBS_INDX     26/10/2019 07:54:36
XY99_T1_TAB_BRI   P_T1_TAB_201805       11         68 XY99_TBS_INDX     26/10/2019 07:06:05
XY99_T1_TAB_BRI   P_T1_TAB_201806       12         69 XY99_TBS_INDX     26/10/2019 06:21:52
XY99_T1_TAB_BRI   P_T1_TAB_201807       13         69 XY99_TBS_INDX     26/10/2019 05:31:09
XY99_T1_TAB_BRI   P_T1_TAB_201808       14         69 XY99_TBS_INDX     26/10/2019 04:43:13
XY99_T1_TAB_BRI   P_T1_TAB_201809       15         72 XY99_TBS_INDX     26/10/2019 03:49:29
XY99_T1_TAB_BRI   P_T1_TAB_201810       16        200 XY99_TBS_INDX     26/10/2019 02:57:51
XY99_T1_TAB_BRI   P_T1_TAB_201811       17         70 XY99_TBS_INDX     26/10/2019 02:08:46
XY99_T1_TAB_BRI   P_T1_TAB_201812       18         70 XY99_TBS_INDX     26/10/2019 01:12:25
XY99_T1_TAB_BRI   P_T1_TAB_201901       19         70 XY99_TBS_INDX     26/10/2019 00:22:16
XY99_T1_TAB_BRI   P_T1_TAB_201902       20         72 XY99_TBS_INDX     25/10/2019 23:22:55
XY99_T1_TAB_BRI   P_T1_TAB_201903       21         73 XY99_TBS_INDX     25/10/2019 22:18:13
XY99_T1_TAB_BRI   P_T1_TAB_201904       22         72 XY99_TBS_INDX     25/10/2019 20:55:06
XY99_T1_TAB_BRI   P_T1_TAB_201905       23         72 XY99_TBS_INDX     25/10/2019 19:43:34
XY99_T1_TAB_BRI   P_T1_TAB_201906       24         70 XY99_TBS_INDX     25/10/2019 18:49:27
XY99_T1_TAB_BRI   P_T1_TAB_201907       25         70 XY99_TBS_INDX     25/10/2019 17:39:51
XY99_T1_TAB_BRI   P_T1_TAB_201908       26         68 XY99_TBS_INDX     04/11/2019 18:52:14
                                           ----------
Total num_rows                                   1852

When the insert/select was relaunched, it completed in less than one hour, as shown by the corresponding SQL monitoring report:

Global Information
------------------------------
 Status                                 :  EXECUTING           
 Instance ID                            :  1                   
 Session                                :  XYZDATA (245:14813) 
 SQL ID                                 :  bsgj02zjrfdbj       
 SQL Execution ID                       :  16777216           
 Execution Started                      :  11/05/2019 10:02:17
 First Refresh Time                     :  11/05/2019 10:02:24
 Last Refresh Time                      :  11/05/2019 11:00:03
 Duration                               :  3466s     -------------------> The new execution time        
 Module/Action                          :  CCC_CCS/EXECUTION   
 Service                                :  MYAPL_1             
 Program                                :  JDBC Thin Client    
 PLSQL Entry Ids (Object/Subprogram)    :  10328167,7          
 PLSQL Current Ids (Object/Subprogram)  :  10328167,7          

-- The new execution plan
Global Stats
======================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Gets  | Reqs | Bytes |
======================================================================
|  3766   |  607    |   3158   |         0.00|      5M|  219K|  142GB|
======================================================================

SQL Plan Monitoring Details (Plan Hash Value=2281702856)
================================================================================================
| Id   |               Operation                |      Name       |  Rows   | Execs |   Rows   |
|      |                                        |                 | (Estim) |       | (Actual) |
================================================================================================
|    0 | INSERT STATEMENT                       |                 |         |     1 |          |
|    1 |   LOAD TABLE CONVENTIONAL              |                 |         |     1 |          |
|    2 |    HASH JOIN OUTER                     |                 |      2M |     1 |        0 |
|    3 |     PARTITION LIST OR                  |                 |      2M |     1 |     971K |
|    4 |      TABLE ACCESS FULL                 | TT_XYZ_BAE      |      2M |     2 |     971K |
|    5 |     PARTITION RANGE ITERATOR           |                 |     104M|     1 |     100M |
|    6 |      TABLE ACCESS FULL                 | T1_TAB          |     104M|    26 |     100M |
================================================================================================

Bottom Line

This simple note shows how diagnosing and fixing a real-life performance issue can sometimes be accomplished in a couple of minutes (in this case, the fix took no longer than the call to the dbms_stats package). It will not always be as obvious as this case tends to suggest, but I can say that I have very often spent more time answering, explaining, and writing about how I fixed a performance issue than it took me to diagnose and fix it.

August 9, 2020

SET operation to join

Filed under: Oracle — hourim @ 10:06 am

In this article about semi-joins in modern relational databases I showed that neither Oracle nor Postgres is able to transform a SET operation like INTERSECT or MINUS into a SEMI-JOIN. Jonathan Lewis has, however, explained in a comment to that article that Oracle has in fact been able to handle conversions from INTERSECT (and MINUS) to semi-join (and anti-join) since 10g. But the conversion is only possible after changing the default value of the _convert_set_to_join parameter from FALSE to TRUE. Starting from 19.1.0.1 the conversion is done automatically because Oracle has finally decided to let this transformation happen by default, as outlined by Tanel Poder’s cofep script:

SQL> @cofep 12.2.0.1 19.1.0.1
Compare Optimizer_Features_Enable Parameter differences
for values 12.2.0.1 and 19.1.0.1

PARAMETER              '12.2.0.1'    '19.1.0.1'   DESCRIPTION
---------------------- ------------- ------------ --------------------------------------------
_convert_set_to_join   FALSE         TRUE         enables conversion of set operator to join

Here’s a simple demonstration:

create table t1
as 
select
   rownum n1
  ,mod(rownum,5) n2
  ,trunc( (rownum -1/3)) n3
from
  dual
connect by
  level <= 1e4;  
  
create table t2
as 
select
   rownum n1
  ,mod(rownum,3) n2
  ,trunc( (rownum -1/5)) n3
from
  dual
connect by
  level <= 5;    

exec dbms_stats.gather_table_stats(user, 't1');
exec dbms_stats.gather_table_stats(user, 't2');
SQL> show parameter optimizer_features

PARAMETER_NAME                  TYPE        VALUE
------------------------------- ----------- ---------
optimizer_features_enable       string      19.1.0

SQL> @pd _convert_set
Show all parameters and session values from x$ksppi/x$ksppcv...

       NUM N_HEX NAME                  VALUE    DESCRIPTION
---------- ----- --------------------- -------- ------------------------------------------
      3718   E86 _convert_set_to_join  FALSE    enables conversion of set operator to join

SQL> select t2.n1 from t2 intersect select t1.n1 from t1;

        N1
----------
         1
         2
         3
         4
         5

SQL> select * from table(dbms_xplan.display_cursor);

----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  INTERSECTION       |      |       |       |
|   2 |   SORT UNIQUE       |      |     5 |    15 |
|   3 |    TABLE ACCESS FULL| T2   |     5 |    15 |
|   4 |   SORT UNIQUE       |      | 10000 | 40000 |
|   5 |    TABLE ACCESS FULL| T1   | 10000 | 40000 |
----------------------------------------------------

As you can see, the INTERSECT operation has not been transformed into a semi-join because, under optimizer_features_enable 19.1.0, the corresponding hidden parameter is still set to FALSE by default.
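
If you are running with a pre-19.1.0.1 optimizer environment, the transformation can still be obtained by changing the hidden parameter at the session level, as Jonathan Lewis pointed out. A sketch only (hidden parameters should, of course, be changed only after due validation):

-- Sketch: force the set-to-join conversion without raising optimizer_features_enable
alter session set "_convert_set_to_join" = true;

select t2.n1 from t2 intersect select t1.n1 from t1;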

But let’s now do the same experiment with optimizer_features_enable set to 19.1.0.1 and check the difference:

SQL> alter session set optimizer_features_enable='19.1.0.1';

Session altered.

SQL> select t2.n1 from t2 intersect select t1.n1 from t1;

        N1
----------
         1
         2
         4
         5
         3

SQL> @xpsimp

SQL_ID  0atbzgvynmj51, child number 1
-------------------------------------
select t2.n1 from t2 intersect select t1.n1 from t1

Plan hash value: 3703458891
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  HASH UNIQUE        |      |     5 |    35 |
|*  2 |   HASH JOIN SEMI    |      |     5 |    35 |
|   3 |    TABLE ACCESS FULL| T2   |     5 |    15 |
|   4 |    TABLE ACCESS FULL| T1   | 10000 | 40000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(SYS_OP_MAP_NONNULL("T2"."N1")=SYS_OP_MAP_NONNULL("T1"."N1"))

Spot now how the INTERSECT set operation has been automatically transformed into a HASH JOIN SEMI operation. This becomes possible by default because, starting from optimizer_features_enable 19.1.0.1, the default value of the hidden parameter _convert_set_to_join is set to TRUE.

While I was trying to print out the reason why the existing cursor was not shared, I spotted something interesting:

SQL> @nonshared 0atbzgvynmj51
Show why existing SQL child cursors were not reused (V$SQL_SHARED_CURSOR)...

SQL_ID                        : 0atbzgvynmj51
ADDRESS                       : 00007FFB65E682E0
CHILD_ADDRESS                 : 00007FFB65E65D80
CHILD_NUMBER                  : 0
HASH_MATCH_FAILED             : Y
REASON                        : <ChildNode><ChildNumber>0</ChildNumber><ID>3</ID>
                                <reason>Optimizer mismatch(12)</reason><size>2x476</size>
								<optimizer_features_enable> 19.1.0  19.1.0.1  </optimizer_features_enable>
								<_optimizer_undo_cost_change> 19.1.0  19.1.0.1 </_optimizer_undo_cost_change>
								<_convert_set_to_join> false true              </_convert_set_to_join>
							    <_subquery_pruning_mv_enabled> false  true     </_subquery_pruning_mv_enabled>
							    <_optimizer_push_down_distinct> 0   5   </_optimizer_push_down_distinct>
							    <_fix_control_key> 0  260074645         </_fix_control_key>
							    <_px_shared_hash_join> false  true      </_px_shared_hash_join>
							    </ChildNode>
CON_ID                        : 1
-----------------
SQL_ID                        : 0atbzgvynmj51
ADDRESS                       : 00007FFB65E682E0
CHILD_ADDRESS                 : 00007FFB65C23F38
CHILD_NUMBER                  : 1
OPTIMIZER_MISMATCH            : Y
REASON                        :
CON_ID                        : 1
-----------------

PL/SQL procedure successfully completed.
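
The nonshared script is, again, one of Tanel Poder’s helpers. If you don’t have it at hand, a minimal query against V$SQL_SHARED_CURSOR exposes the same raw information (a sketch; the column list is trimmed for readability):

-- Sketch: raw cursor non-sharing information without the nonshared script
select
     sql_id
    ,child_number
    ,optimizer_mismatch
    ,hash_match_failed
    ,reason
from
    v$sql_shared_cursor
where
    sql_id = '0atbzgvynmj51';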

It looks like Oracle is now showing all the optimizer parameters that changed between the optimizer version used to compile child cursor n°0 and the one used to compile child cursor n°1. This is confirmed using Tanel Poder’s cofep script:

SQL> @cofep 19.1.0 19.1.0.1
Compare Optimizer_Features_Enable Parameter differences
for values 19.1.0 and 19.1.0.1

PARAMETER                       '19.1.0'   '19.1.0.1' DESCRIPTION
------------------------------- ---------- ---------- ----------------------------------------------------------------------
_optimizer_push_down_distinct   0          5          push down distinct from query block to table
optimizer_features_enable       19.1.0     19.1.0.1   optimizer plan compatibility parameter
_optimizer_undo_cost_change     19.1.0     19.1.0.1   optimizer undo cost change
_subquery_pruning_mv_enabled    FALSE      TRUE       enable the use of subquery predicates with MVs to perform pruning
_convert_set_to_join            FALSE      TRUE       enables conversion of set operator to join
_px_shared_hash_join            FALSE      TRUE       enable/disable shared hash join

6 rows selected.

August 6, 2020

Historical column histogram

Filed under: Oracle — hourim @ 4:03 pm

Tracking column histogram modifications is not a common task in the Oracle DBA and developer sphere. And why would one want to know the past histogram values of a column? In this blog post I intend to provide a script for that purpose and to show an example where this script can be of great help.

1. Historical column histogram values

Here’s the script, which I have named h_hist1 (you will see later why I chose this name):

/* ----------------------------------------------------------------------------------|
|Author : Mohamed Houri                                                              |
|Date   : 03/07/2020                                                                 |
|Scope  : This script gives historical column histogram values                       |
|          -- I am using sys.WRI$_OPTSTAT_HISTHEAD_HISTORY for this purpose          |
|          -- I am only able to say whether, previously, there was HISTOGRAM or not  |
|          -- I can't show the historical type of Histogram                          |
|                                                                                    |
|Usage  :  SQL> @h_hist1                                                             |
|           Enter value for table_name: t1                                           |
|           Enter value for owner: test                                              |
|           Enter value for col_name: n2                                             |
-------------------------------------------------------------------------------------|*/
col object_name   	    format a20
col column_name  		format a12
col last_analyzed 		format a20
col prev_last_analyzed  format a20
col histogram           format a16
col prev_histogram      format a16
WITH sq AS 
    (
     SELECT
	      object_id
		 ,object_name
		 ,subobject_name
	 FROM
	     dba_objects
	 WHERE
	     object_name    = upper ('&&table_name')
	 AND owner          = upper('&&owner')
	 AND subobject_name IS NULL
	 )
SELECT
	 object_name
	,column_name
	,lead(prev_histogram,1,histogram) over (order by last_analyzed) histogram
	,last_analyzed
	,prev_histogram
	,prev_last_analyzed
FROM
   (
     SELECT
	     object_name
		,column_name
		,(select histogram from all_tab_col_statistics where owner = upper('&&owner') 
		  and table_name = upper('&&table_name') and column_name = upper('&&col_name')) histogram
		,last_analyzed
		,stat_time prev_last_analyzed
		,row_number() over (order by last_analyzed) rn
		,case when round(derivedDensity,9)= round(density,9) then 'NONE' else 'HISTOGRAM' end prev_histogram
	 FROM
	    (
		 SELECT
		     object_name
			,column_name
			,to_char(savtime ,'dd/mm/yyyy hh24:mi:ss')     last_analyzed
			,to_char(timestamp# ,'dd/mm/yyyy hh24:mi:ss') stat_time
			,density
			,1/distcnt derivedDensity
			,row_number() over (order by savtime) rn
			,lag(case when round(1/distcnt,9) = round(density,9) then 'NONE' else 'HISTOGRAM' end) over(order by savtime) hist_histogram
		 FROM
		    sys.WRI$_OPTSTAT_HISTHEAD_HISTORY
			INNER JOIN sq ON object_id = obj#
			INNER JOIN (SELECT 
			                column_id
						   ,column_name
						FROM
						    dba_tab_columns
						WHERE
						    column_name = upper('&&col_name')
						AND table_name  = upper('&&table_name') 
			            AND owner       = upper('&&owner')
						) ON intcol# = column_id
	)
WHERE
   rn >= 1 --exclude/include the very first dbms_stats run
   )
ORDER BY
    last_analyzed;

And here’s a simple usage illustration:

SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size auto');
SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size 1');
SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size 3');

SQL> @h_hist1
Enter value for table_name: t1
Enter value for owner: c##mhouri
Enter value for col_name: n2

OBJECT_NAME  COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
------------ ------------ ---------------- -------------------- ---------------- --------------------
T1           N2           HISTOGRAM        06/08/2020 15:39:00  HISTOGRAM        25/06/2020 18:42:06
T1           N2           NONE             06/08/2020 15:43:58  HISTOGRAM        06/08/2020 15:38:59
T1           N2           TOP-FREQUENCY    06/08/2020 16:03:19  NONE             06/08/2020 15:43:58

As you can see, we went from HISTOGRAM on August 6th at 15:39 to NONE at 15:43 and then to TOP-FREQUENCY at 16:03. If I gather statistics again on the same table, here’s what the h_hist1 script will show:

SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size auto');
SQL> @h_hist1

OBJECT_NAME COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
----------- ------------ ---------------- -------------------- ---------------- --------------------
T1          N2           HISTOGRAM        06/08/2020 15:39:00  HISTOGRAM        25/06/2020 18:42:06
T1          N2           NONE             06/08/2020 15:43:58  HISTOGRAM        06/08/2020 15:38:59
T1          N2           HISTOGRAM        06/08/2020 16:03:19  NONE             06/08/2020 15:43:58
T1          N2           FREQUENCY        06/08/2020 16:23:00  HISTOGRAM        06/08/2020 16:03:18

While I lose the histogram type (TOP-FREQUENCY) as I go back through the historical path of the n2 column, I am, however, able to say whether this column previously had a histogram or not.

When you Google the words “historical histogram Oracle”, the first hit is a blog post by Nigel Bayliss: How do I get histogram history? The script shared by Nigel is named h_hist. Here’s the output of this script when applied to the current t1/n2 case:

SQL> @h_hist t1 c##mhouri
Table : C##MHOURI.T1

Column: N2                                         Last analyzed: 2020-06-08 16:22:59 [Current: FREQUENCY]
-     2020-06-08 15:39:00     0 ->     5 buckets CHANGE
-     2020-06-08 15:43:58     5 ->     0 buckets CHANGE
-     2020-06-08 16:03:19     0 ->     3 buckets CHANGE
-     2020-06-08 16:22:59     3 ->     5 buckets CHANGE

PL/SQL procedure successfully completed.

Its output coincides perfectly with my script’s but, for practical reasons, I still prefer using mine 🙂

2. When might you need this script?

Here’s an example illustrating the utility of the h_hist1 script:

SQL> select count(1) from t where n2= 100;

  COUNT(1)
----------
       100

SQL_ID  2k5g2apy78hj8, child number 1
-------------------------------------
Plan hash value: 1882749816
------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes |
------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |
|   1 |  SORT AGGREGATE   |          |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T_ACS_I1 |   100 |   300 |
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"=:SYS_B_1)

SQL> select count(1) from t where n2= 1e6;

  COUNT(1)
----------
   1099049

SQL_ID  2k5g2apy78hj8, child number 2
-------------------------------------
Plan hash value: 2966233522
---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |
|*  2 |   TABLE ACCESS FULL| T    |  1099K|  3219K|
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N2"=:SYS_B_1)

We have a query switching between an INDEX RANGE SCAN and a FULL TABLE SCAN depending on the number of rows returned for each bind variable value. So far so good, until you are asked to troubleshoot a performance issue caused by this query because it is stuck with the INDEX RANGE SCAN execution plan whatever the bind variable value is:

SQL> select count(1) from t where n2= 1e6;

  COUNT(1)
----------
   1099049

------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes |
------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |
|   1 |  SORT AGGREGATE   |          |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T_ACS_I1 |   240K|   703K|
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"=:SYS_B_1)

This performance pain occurred because someone, somewhere, had deleted the histogram of column n2 of table t, as the following shows:

SQL> @h_hist1

OBJECT_NAME COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
----------- ------------ ---------------- -------------------- ---------------- --------------------
T           N2           HISTOGRAM        06/08/2020 17:10:44  HISTOGRAM
T           N2           NONE             06/08/2020 17:32:45  HISTOGRAM        06/08/2020 17:10:43 

Thanks to the FREQUENCY histogram on column n2, the previous cursor was bind sensitive and bind aware, allowing the query to get the best possible execution plan for each bind variable value. But as soon as the n2 column lost its histogram and the cursor was flushed from memory, the new cursor was neither bind sensitive nor bind aware, which explains why the query was stuck with the INDEX RANGE SCAN execution plan.
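
The bind sensitive/bind aware status of the cursor can be checked directly in V$SQL (a simple sketch):

-- Sketch: check the adaptive cursor sharing flags of the cursor
select
     sql_id
    ,child_number
    ,is_bind_sensitive
    ,is_bind_aware
    ,is_shareable
    ,executions
from
    v$sql
where
    sql_id = '2k5g2apy78hj8';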

select
   dbms_stats.report_stats_operations(since => sysdate -1) text_line
from dual;

----------------------------------------------------------------------------------------
| Operation Id | Operation              | Target              | Start Time             |
----------------------------------------------------------------------------------------
| 5219         | gather_table_stats     | "C##MHOURI"."T"     | 06/08/20               |
|              |                        |                     | 17:32:45,333000 +02:00 |
----------------------------------------------------------------------------------------

select
   dbms_stats.report_single_stats_operation
      (opid         => 5219
      ,detail_level => 'ALL')
from dual;

---------------------------------------------------------------------------------------------
| Operation | Operation          | Target          | Session | Additional Info              |
| Id        |                    |                 | Id      |                              |
---------------------------------------------------------------------------------------------
| 5219      | gather_table_stats | "C##MHOURI"."T" | 395     | Parameters: [block_sample:   |
|           |                    |                 |         | FALSE] [cascade: NULL]       |
|           |                    |                 |         | [concurrent: FALSE] [degree: |
|           |                    |                 |         | NULL] [estimate_percent:     |
|           |                    |                 |         | DBMS_STATS.AUTO_SAMPLE_SIZE] |
|           |                    |                 |         | [force: FALSE] [granularity: |
|           |                    |                 |         | AUTO] [method_opt: for all   |
|           |                    |                 |         | columns size 1]              |
|           |                    |                 |         | [no_invalidate: NULL]        |
|           |                    |                 |         | [ownname: C##MHOURI]         |
|           |                    |                 |         | [partname: ]                 |
|           |                    |                 |         | [reporting_mode: FALSE]      |
|           |                    |                 |         | [statid: ] [statown: ]       |
|           |                    |                 |         | [stattab: ] [stattype: DATA] |
|           |                    |                 |         | [tabname: t]                 |
---------------------------------------------------------------------------------------------

Spot how the last dbms_stats run (operation id n°5219) did not gather histograms because it used:

[method_opt: for all columns size 1] 
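
Once the root cause is identified, the obvious fix is to put the histogram back on column n2; a hedged sketch (the method_opt value is illustrative; 'size skewonly' or 'size auto' are common alternatives):

-- Sketch: re-gather statistics requesting a histogram on n2 (bucket count is illustrative)
exec dbms_stats.gather_table_stats(user, 't', method_opt => 'for all columns size 1 for columns n2 size 254', no_invalidate => false);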

3. Conclusion

Tracking column histogram modifications can be very handy in certain circumstances. The h_hist1 script provided in this blog post can help in such situations.

August 5, 2020

On the impact of stale statistics when enabling Foreign keys

Filed under: Oracle,Statistics — hourim @ 12:55 pm

If you are going to enable a foreign key integrity constraint, make sure you have fresh and representative statistics on both the child and the parent table. If not, you might lose a lot of time enabling your constraint because of a recursive query that is not easy to spot.

Here’s a summary of a real-life 11.2.0.4 case:

SQL Text
------------------------------
BEGIN XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL); END;

Global Information
------------------------------
 Status              :  EXECUTING           
 Instance ID         :  1                   
 Session             :  XYZ_ABC(13:25335)  
 SQL ID              :  46bn1bvfkkpvn       
 SQL Execution ID    :  16777216            
 Execution Started   :  01/13/2020 16:54:37 
 First Refresh Time  :  01/13/2020 16:54:43 
 Last Refresh Time   :  01/14/2020 15:23:35 
 Duration            :  80939s              
 Module/Action       :  SQL Developer/-     
 Service             :  SYS$USERS           
 Program             :  SQL Developer       

Global Stats
===========================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | PL/SQL  |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Time(s) | Waits(s) |  Gets  | Reqs | Bytes |
===========================================================================================
|   80937 |   79809 |     3.27 |        1.74 |    0.00 |     1124 |     2G |  215 |  21MB |
===========================================================================================

The FK enabling process was still in EXECUTING status after more than 22 hours, as shown above. Let’s check what ASH can tell us about this situation:

SQL> select sql_id, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  group by sql_id
  7  order by 2 desc
  8  ;

SQL_ID          COUNT(1)
------------- ----------
                   17801
608srf0vf5f3q      12937 -->  alter table ABCDXY. TABLE_ABC_C
a8gtvr24afy70      10919
1vtd595xys9fp       9135
7q4kq2auz89x1       4203
fdxa2ph5250x1       3058

Manifestly, we must focus our attention on the 608srf0vf5f3q SQL_ID.

From a wait event point of view, ASH shows this:

SQL> select event, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  group by event
  7  order by 2 desc;

EVENT                      COUNT(1)
------------------------ ----------
                              48756
row cache lock                12194
db file parallel write         6867
db file scattered read         6519
db file sequential read        5356
direct path read               1794
…/../

A lot of CPU burning followed by the row cache lock wait event. However, the SQL_ID 608srf0vf5f3q is reported to be only burning CPU, as the following shows:

SQL> select event, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  and sql_id = '608srf0vf5f3q'
  7  group by event
  8  order by 2 desc;

EVENT     COUNT(1)
------- ----------
             12937

Since the enabling process was still running, I decided to “snap” the SID (13) of the PL/SQL stored procedure call, which I got from the corresponding gv$sql_monitor entry:

SQL> @snapper all 5 1 13
Sampling SID 13 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v4.26 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) 
- Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! 

------------------------------------------------------------------------------------------------
SID, USERNAME, TYPE, STATISTIC                            ,         DELTA, HDELTA/SEC,    %TIME,
------------------------------------------------------------------------------------------------
 13, XYZ_ABC, STAT, session logical reads                ,        142350,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets                      ,        142351,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets from cache           ,        142352,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets from cache (fastpath),        142352,     27.03k,         ,
 13, XYZ_ABC, STAT, logical read bytes from cache        ,    1166147584,    221.41M,         , --> this looks interesting
 13, XYZ_ABC, STAT, calls to kcmgcs                      ,            22,       4.18,         ,
 13, XYZ_ABC, STAT, no work - consistent read gets       ,        142334,     27.02k,         ,
 13, XYZ_ABC, STAT, table scans (short tables)           ,             3,        .57,         ,
 13, XYZ_ABC, STAT, table scan rows gotten               ,      23484525,      4.46M,         ,
 13, XYZ_ABC, STAT, table scan blocks gotten             ,        142335,     27.02k,         ,
 13, XYZ_ABC, STAT, buffer is pinned count               ,             3,        .57,         ,
 13, XYZ_ABC, TIME, DB CPU                               ,       6003174,      1.14s,   114.0%,
 13, XYZ_ABC, TIME, sql execute elapsed time             ,       6003624,      1.14s,   114.0%,
 13, XYZ_ABC, TIME, DB time                              ,       6003624,      1.14s,   114.0%,

--  End of Stats snap 1, end=2020-01-14 15:30:20, seconds=5.3

----------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT  | WAIT_CLASS
----------------------------------------------------------------------------------
    1.00    (100%) |    1 | 608srf0vf5f3q   | 0         | ON CPU | ON CPU

--  End of ASH snap 1, end=2020-01-14 15:30:20, seconds=5, samples_taken=48, AAS=1

Clearly, all the investigations show that the culprit query is 608srf0vf5f3q, which is burning a lot of CPU. The 221MB per second of logical reads from cache correlates very well with the high CPU consumption. But we still don’t know why, and what exactly is doing those hundreds of millions of logical reads from cache.

In addition, the ash_wait_chains script applied to SQL_ID 608srf0vf5f3q shows that this query is not blocked by any other session:

SQL> @ash_wait_chains event2 sql_id='608srf0vf5f3q' "timestamp'2020-01-14 12:00:00'" "timestamp'2020-01-14 15:45:00'"

-- Display ASH Wait Chain Signatures script v0.4 BETA by Tanel Poder ( http://blog.tanelpoder.com )

%This     SECONDS     AAS WAIT_CHAIN    LAST_SEEN
------ ---------- ------- ------------- --------------------
 100%       12937     1.0 -> ON CPU     2020-01-14 15:44:59

That’s weird, isn’t it?

After a couple of minutes going round in circles, I decided to use brute force by:

  • Killing the process
  • Activating the 10046 trace
  • Running the process for a couple of minutes
  • Stopping the process again
  • Analyzing the trace file

alter session set tracefile_identifier = EnablingForeignKey;
@46on 12
exec XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL);
@46off
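
The @46on and @46off scripts are just convenience wrappers; if you don’t have them, the same extended SQL trace can be switched on and off with the classic event syntax (a sketch; level 12 captures both wait events and bind values):

-- Sketch: enable/disable the 10046 trace without the helper scripts
alter session set tracefile_identifier = 'EnablingForeignKey';
alter session set events '10046 trace name context forever, level 12';

exec XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL);

alter session set events '10046 trace name context off';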

And here’s what I found in the TKPROF’ed trace file:

SQL ID: 56tj8jdcw15jb Plan Hash: 2257352843

select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
from
 "ABCDXY"."TABLE_ABC_C" A , "ABCDXY"."TABLE_PARENT_ABC" B where( 
  "A"."COL_1" is not null and "A"."COL_2" is not null and "A"."COL_3" 
  is not null and "A"."COL_4" is not null) and( "B"."COL_1" (+)= 
  "A"."COL_1" and "B"."COL_2" (+)= "A"."COL_2" and "B"."COL_3" (+)=
   "A"."COL_3" and "B"."COL_4" (+)= "A"."COL_4") and( "B"."COL_1" is 
  null or "B"."COL_2" is null or "B"."COL_3" is null or "B"."COL_4" is 
  null)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    217.83     224.03          0    6179071          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    217.83     224.03          0    6179071          0           0

This recursive query was enough for me to understand and fix the issue. I then started by checking the historical executions of this recursive query, as shown below:

SQL> @SqlHistStat
Enter value for sql_id: 56tj8jdcw15jb
Enter value for from_date: 01012020

SNAP_BEGIN                PLAN_HASH_VALUE    EXECS END_OF_FETCH  AVG_ETIME     AVG_LIO   AVG_ROWS
------------------------- --------------- -------- ------------ ----------  ---------- ----------
13-JAN-20 04.00.44.563 PM      2257352843        1            0        381    10934401          0
13-JAN-20 05.00.56.481 PM      2257352843        0            0       3551   104544327          0
13-JAN-20 06.00.05.981 PM      2257352843        0            0       3609   104844651          0
13-JAN-20 07.00.15.180 PM      2257352843        0            0       3619    84153912          0
13-JAN-20 08.00.29.468 PM      2257352843        0            0       3610   104607796          0
13-JAN-20 09.00.38.801 PM      2257352843        0            0       3619    93027006          0
13-JAN-20 10.00.49.488 PM      2257352843        0            0       3595   102089647          0
13-JAN-20 11.00.59.446 PM      2257352843        0            0       3545    99557582          0
14-JAN-20 12.00.09.039 AM      2257352843        0            0       3609   104246681          0
14-JAN-20 01.00.18.823 AM      2257352843        0            0       3608   106861170          0
14-JAN-20 02.00.28.120 AM      2257352843        0            0       3611   103242627          0
14-JAN-20 03.00.37.711 AM      2257352843        0            0       3610   106157348          0
14-JAN-20 04.00.47.161 AM      2257352843        0            0       3608   103770559          0
14-JAN-20 05.00.56.531 AM      2257352843        0            0       3547   103304809          0
14-JAN-20 06.00.05.487 AM      2257352843        0            0       3609   105243323          0
14-JAN-20 07.00.14.675 AM      2257352843        0            0       3608   107326829          0
14-JAN-20 08.00.23.442 AM      2257352843        0            0       3608   108137854          0
14-JAN-20 09.00.32.133 AM      2257352843        0            0       3609   107183542          0
14-JAN-20 10.00.41.111 AM      2257352843        0            0       3610   104855775          0
14-JAN-20 11.00.50.825 AM      2257352843        0            0       3554   100223228          0
14-JAN-20 12.00.00.751 PM      2257352843        0            0       3610    99540252          0
14-JAN-20 01.00.11.282 PM      2257352843        0            0       3614   101153086          0
14-JAN-20 02.00.21.974 PM      2257352843        0            0       3610    98943412          0
14-JAN-20 03.00.32.525 PM      2257352843        1            0        436    12358214          0

24 rows selected.
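
The SqlHistStat script is not reproduced here either; a minimal sketch of such a query against AWR (Diagnostics Pack licence required; the column list is illustrative) could be:

-- Sketch: per-snapshot execution statistics of a given SQL_ID from AWR
select
     s.begin_interval_time           snap_begin
    ,q.plan_hash_value
    ,q.executions_delta              execs
    ,q.end_of_fetch_count_delta      end_of_fetch
    ,round(q.elapsed_time_delta/1e6) elapsed_sec
    ,q.buffer_gets_delta             buffer_gets
from
     dba_hist_sqlstat  q
    ,dba_hist_snapshot s
where
    q.snap_id         = s.snap_id
and q.dbid            = s.dbid
and q.instance_number = s.instance_number
and q.sql_id          = '&sql_id'
order by
    s.begin_interval_time;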

As the END_OF_FETCH column shows, this query spanned 24 one-hour snapshots without completing. It uses the following execution plan:

--------------------------------------------------------------------------------
| Id  | Operation                  | Name              | Rows  | Pstart| Pstop |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                   |       |       |       |
|*  1 |  FILTER                    |                   |       |       |       |
|   2 |   NESTED LOOPS OUTER       |                   |     1 |       |       |
|   3 |    PARTITION RANGE ALL     |                   |     1 |     1 |     2 |
|   4 |     TABLE ACCESS FULL      | TABLE_ABC_C       |     1 |     1 |     2 |
|   5 |    PARTITION RANGE ITERATOR|                   |     1 |   KEY |   KEY |
|*  6 |     TABLE ACCESS FULL      | TABLE_PARENT_ABC  |     1 |   KEY |   KEY |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("B"."COL_1" IS NULL OR "B"."COL_2" IS NULL OR "B"."COL_3" IS NULL OR
              "B"."COL_4" IS NULL))
   6 - filter(("B"."COL_1"="A"."COL_1" AND "B"."COL_2"="A"."COL_2" AND
               "B"."COL_3"="A"."COL_3" AND "B"."COL_4"="A"."COL_4"))

I don’t know why, but whenever I have to fix a very long, never-ending process, I almost always end up finding that a NESTED LOOPS join has something to do with the pain 😊

It is clear now that the TABLE_ABC_C child table and its parent TABLE_PARENT_ABC need fresh statistics before relaunching the FK enabling process. I keep saying that, when an estimated cardinality of 1 is reported in an execution plan, there is a big chance that this is due to non-fresh or non-representative statistics.

SQL> exec dbms_stats.gather_table_stats ('ABCDXY', 'TABLE_ABC_C', cascade => true, no_invalidate => false);
SQL> exec dbms_stats.gather_table_stats ('ABCDXY', 'TABLE_PARENT_ABC', cascade => true, no_invalidate => false);

SQL> @HistStat
Enter value for owner: ABCDXY
Enter value for table_name: TABLE_ABC_C

OBJECT_NAME    OBJECT_TYPE  PREV_STAT_TIME           ROWCNT SAMPLESIZE ESTIMATE_PCT
-------------- ------------ -------------------- ---------- ---------- ------------
TABLE_ABC_C  TABLE       
TABLE_ABC_C  TABLE        09/01/2020 16:59:25           0          0            0
TABLE_ABC_C  TABLE        14/01/2020 16:34:17      946798     946798          100
TABLE_ABC_C  TABLE        14/01/2020 17:09:02      946798     946798          100

SQL> @HistStat
Enter value for owner: ABCDXY
Enter value for table_name: TABLE_PARENT_ABC

OBJECT_NAME         OBJECT_TYPE  PREV_STAT_TIME        ROWCNT SAMPLESIZE ESTIMATE_PCT
------------------- ------------ -------------------- ------- ---------- ------------
TABLE_PARENT_ABC   TABLE       
TABLE_PARENT_ABC   TABLE        09/01/2020 16:59:25        0          0            0
TABLE_PARENT_ABC   TABLE        14/01/2020 16:34:43  9032840    9032840          100
TABLE_PARENT_ABC   TABLE        14/01/2020 17:08:52  9032840    9032840          100

And finally, the coast is clear to re-enable the foreign key in about 10 seconds as shown below:

SQL> exec DBA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL);

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.29 --> 10 seconds!

This was possible because the new execution plan switched from that dramatic NESTED LOOPS to a more suitable HASH JOIN, as shown below:

SQL_ID  56tj8jdcw15jb, child number 0
-------------------------------------
Plan hash value: 1651427782
------------------------------------------------------------------------------------------
| Id  | Operation            | Name              | Rows  | Bytes |TempSpc| Pstart| Pstop |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                   |       |       |       |       |       |
|   1 |  PARTITION RANGE ALL |                   |     1 |    90 |       |     1 |     3 |
|*  2 |   FILTER             |                   |       |       |       |       |       |
|*  3 |    HASH JOIN OUTER   |                   |     1 |    90 |    18M|       |       |
|   4 |     TABLE ACCESS FULL| TABLE_ABC_C       |   946K|    46M|       |     1 |     3 |
|   5 |     TABLE ACCESS FULL| TABLE_PARENT_ABC  |  9032K|   335M|       |     1 |     3 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("B"."COL_1" IS NULL OR "B"."COL_2" IS NULL OR "B"."COL_3" IS NULL 
              OR "B"."COL_4" IS NULL))
   3 - access("B"."COL_4"="A"."COL_4" AND "B"."COL_1"="A"."COL_1" AND
              "B"."COL_2"="A"."COL_2" AND "B"."COL_3"="A"."COL_3")

SQL> @sqlstats
Enter value for sql_id: 56tj8jdcw15jb

     CHILD PLAN_HASH_VALUE   AVG_GETS   AVG_PIOS  AVG_ETIME     EXECS
---------- --------------- ---------- ---------- ---------- ---------
         0      1651427782     133519     109286  27.018412         1
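
For reference, what an ENABLE_FK wrapper like the one above ultimately issues is a plain ALTER TABLE; it is the VALIDATE step (the default behaviour of ENABLE) that triggers the recursive anti-join query 56tj8jdcw15jb seen in the trace file. A sketch, with an illustrative constraint name:

-- Sketch: the underlying DDL whose VALIDATE step runs the recursive validation query
-- (the constraint name is illustrative)
alter table ABCDXY.TABLE_ABC_C
  enable validate constraint TABLE_ABC_C_FK;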

Bottom Line

Regardless of the kind of Oracle SQL operation you are executing, you should always make sure you have given Oracle, beforehand, enough statistical information about your tables, indexes, and columns. If not, rest assured that you will eventually stop being lucky and run into performance trouble. And, in the case of enabling a foreign key constraint, this performance pain will be exacerbated by a recursive query involving the parent and child tables. If this recursive query gets a wrong execution plan because of a lack of statistics, it will drastically delay the FK enabling time, and it will be very hard to point out that the root cause of the performance pain is that recursive query.
