Mohamed Houri’s Oracle Notes

December 21, 2020

Disk file operations I/O

Filed under: Oracle — hourim @ 9:58 am

When the Disk file operations I/O wait event starts showing at the top of the AWR Top 10 Foreground Events by Total Wait Time, it can consume a lot of CPU and freeze your database. Since I went through this myself a couple of weeks ago, I thought it might be interesting to write a succinct blog article explaining how to find and fix the root cause of this uncommon wait event.

When I started investigating the real-life case, all I had in hand were the p1 and p3 parameter values of this wait event:

  •  p1=8 corresponds to wait for miscellaneous io (ftp, block dump, passwd file)
  •  p3=8 corresponds to any other type of file

To say the least, I was unlucky with these values: they point to no specific file and no specific operation.
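Rather than memorizing these codes, you can at least check what the parameters of the wait event stand for against the standard v$event_name dictionary view (the meaning of the individual FileOperation and filetype codes themselves comes from the Oracle Reference documentation):

```sql
-- Check what p1, p2 and p3 mean for this wait event
select name, wait_class, parameter1, parameter2, parameter3
from   v$event_name
where  name = 'Disk file operations I/O';
```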

As I was searching online, I came across a forum question in which one of the contributors pointed to an audit file type and operation. From there I built the following simple model:

SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/product/19c/dbhome_1/rdbms/audit                                                
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      OS
unified_audit_common_systemlog       string
unified_audit_sga_queue_size         integer     1048576
unified_audit_systemlog              string

SQL> AUDIT ALL BY c##test BY ACCESS;
SQL> AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY c##test BY ACCESS;
SQL> AUDIT EXECUTE PROCEDURE BY c##test BY ACCESS;

SQL> create table t1 as select rownum n1, mod(rownum,2) n2, lpad('x',10) vc from dual connect by level <= 1e5;

create or replace procedure p1
as
  cursor c1 is
    select * from t1;
begin
  for r1 in c1
  loop
    update t1 set vc = 'yz' where n1 = r1.n1;
    insert into t1 values (r1.n1, r1.n2, r1.vc);
    delete from t1 where n1 = r1.n1 and n2 = r1.n2 and vc = r1.vc;
  end loop;
  commit;
end p1;
/

I set the audit_trail parameter to OS and audited several DML operations done by the c##test user.

All I have to do now to reproduce the Disk file operations I/O wait event with p1 and p3 both set to 8 is to execute the p1 procedure, as shown below:

  SQL> exec p1();

While the session executing the p1 procedure (sid=205) was running, I snapped it to see what it was doing behind the scenes:

SQL> @snapper all 5 1 205
Sampling SID 205 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v4.30 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) 
- Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! 

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    SID, USERNAME  , TYPE, STATISTIC                            ,         DELTA, HDELTA/SEC,    %TIME, GRAPH       , NUM_WAITS,  WAITS/SEC,   AVERAGES
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    205, C##TEST   , STAT, opened cursors cumulative            ,          1859,     188.24,         ,             ,          ,           ,       1.01 per execution
    205, C##TEST   , STAT, recursive calls                      ,          1866,     188.95,         ,             ,          ,           ,     1.28ms recursive CPU per recursive call
    205, C##TEST   , STAT, recursive cpu usage                  ,           238,       24.1,         ,             ,          ,           ,      2.38s total recursive CPU usage
    205, C##TEST   , STAT, session logical reads                ,        413327,     41.85k,         ,             ,          ,           ,    414.55k total buffer visits
    205, C##TEST   , STAT, CPU used by this session             ,           241,       24.4,         ,             ,          ,           ,      2.41s total CPU used by this session
    205, C##TEST   , STAT, user I/O wait time                   ,             6,        .61,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, non-idle wait time                   ,             6,        .61,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, non-idle wait count                  ,          1858,     188.14,         ,             ,          ,           ,       1.01 per execution
    205, C##TEST   , STAT, physical read total IO requests      ,             1,         .1,         ,             ,          ,           ,     16.38k bytes per request
    205, C##TEST   , STAT, physical read total bytes            ,         16384,      1.66k,         ,             ,          ,           ,       8.94 per execution
    205, C##TEST   , STAT, cell physical IO interconnect bytes  ,         16384,      1.66k,         ,             ,          ,           ,       8.94 per execution
    205, C##TEST   , STAT, db block gets                        ,          1911,      193.5,         ,             ,          ,           ,       1.04 per execution
    205, C##TEST   , STAT, db block gets from cache             ,          1911,      193.5,         ,             ,          ,           ,       1.04 per execution
    205, C##TEST   , STAT, db block gets from cache (fastpath)  ,           644,      65.21,         ,             ,          ,           ,        .35 per execution
    205, C##TEST   , STAT, consistent gets                      ,        412820,      41.8k,         ,             ,          ,           ,     225.34 per execution
    205, C##TEST   , STAT, consistent gets from cache           ,        412811,      41.8k,         ,             ,          ,           ,     225.33 per execution
    205, C##TEST   , STAT, consistent gets pin                  ,        412811,      41.8k,         ,             ,          ,           ,     225.33 per execution
    205, C##TEST   , STAT, consistent gets pin (fastpath)       ,        412811,      41.8k,         ,             ,          ,           ,     225.33 per execution
    205, C##TEST   , STAT, consistent gets direct               ,             9,        .91,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, logical read bytes from cache        ,    3397402624,    344.01M,         ,             ,          ,           ,      1.85M per execution
    205, C##TEST   , STAT, physical reads                       ,             2,         .2,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, physical reads direct                ,             2,         .2,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, physical read IO requests            ,             1,         .1,         ,             ,          ,           ,     16.38k bytes per request
    205, C##TEST   , STAT, physical read bytes                  ,         16384,      1.66k,         ,             ,          ,           ,       8.94 per execution
    205, C##TEST   , STAT, db block changes                     ,          3749,     379.61,         ,             ,          ,           ,       2.05 per execution
    205, C##TEST   , STAT, change write time                    ,            49,       4.96,         ,             ,          ,           ,        .03 per execution
    205, C##TEST   , STAT, free buffer requested                ,          1265,     128.09,         ,             ,          ,           ,        .69 per execution
    205, C##TEST   , STAT, dirty buffers inspected              ,            10,       1.01,         ,             ,          ,           ,        .01 per execution
    205, C##TEST   , STAT, pinned buffers inspected             ,             1,         .1,         ,             ,          ,           ,  .55*10^-3 per execution
    205, C##TEST   , STAT, hot buffers moved to head of LRU     ,            65,       6.58,         ,             ,          ,           ,        .04 per execution
    205, C##TEST   , STAT, free buffer inspected                ,           123,      12.45,         ,             ,          ,           ,        .07 per execution
    205, C##TEST   , STAT, switch current to new buffer         ,          1240,     125.56,         ,             ,          ,           ,        .68 per execution
    205, C##TEST   , STAT, switch current caused by our pin     ,          1240,     125.56,         ,             ,          ,           ,        .68 per execution
    205, C##TEST   , STAT, shared hash latch upgrades - no wait ,           726,      73.51,         ,             ,          ,           ,         .4 per execution
    205, C##TEST   , STAT, calls to kcmgcs                      ,          3714,     376.07,         ,             ,          ,           ,       2.03 per execution
    205, C##TEST   , STAT, calls to kcmgas                      ,             4,        .41,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, calls to get snapshot scn: kcmgss    ,          1855,     187.83,         ,             ,          ,           ,       1.01 per execution
    205, C##TEST   , STAT, redo entries                         ,          1856,     187.93,         ,             ,          ,           ,       1.01 per execution
    205, C##TEST   , STAT, redo size                            ,        521376,     52.79k,         ,             ,          ,           ,          ~ bytes per user commit
    205, C##TEST   , STAT, redo ordering marks                  ,             4,        .41,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, file io wait time                    ,         22467,      2.27k,         ,             ,          ,           ,    22.47ms bad guess of IO wait time per IO request
    205, C##TEST   , STAT, Number of read IOs issued            ,             1,         .1,         ,             ,          ,           ,  .55*10^-3 per execution
    205, C##TEST   , STAT, undo change vector size              ,        182320,     18.46k,         ,             ,          ,           ,      99.52 per execution
    205, C##TEST   , STAT, no work - consistent read gets       ,        406064,     41.12k,         ,             ,          ,           ,     221.65 per execution
    205, C##TEST   , STAT, table scans (long tables)            ,          1231,     124.65,         ,             ,          ,           ,        .67 per execution
    205, C##TEST   , STAT, table scan rows gotten               ,     123963294,     12.55M,         ,             ,          ,           ,     67.67k per execution
    205, C##TEST   , STAT, table scan disk non-IMC rows gotten  ,     123898306,     12.55M,         ,             ,          ,           ,     67.63k per execution
    205, C##TEST   , STAT, table scan blocks gotten             ,        404643,     40.97k,         ,             ,          ,           ,     220.88 per execution
    205, C##TEST   , STAT, heap block compress                  ,             6,        .61,         ,             ,          ,           ,          0 per execution
    205, C##TEST   , STAT, HSC Heap Segment Block Changes       ,          1853,     187.63,         ,             ,          ,           ,       1.01 per execution
    205, C##TEST   , STAT, session cursor cache hits            ,          1840,     186.31,         ,             ,          ,           ,      1.84k softparses avoided thanks to cursor cache
    205, C##TEST   , STAT, buffer is pinned count               ,          1222,     123.74,         ,             ,          ,           ,        .29 % buffer gets avoided thanks to buffer pin caching
    205, C##TEST   , STAT, buffer is not pinned count           ,          2446,     247.68,         ,             ,          ,           ,       1.34 per execution
    205, C##TEST   , STAT, execute count                        ,          1832,      185.5,         ,             ,          ,           ,          ~ executions per parse
    205, C##TEST   , TIME, PL/SQL execution elapsed time        ,         11791,     1.19ms,      .1%, [          ],          ,           ,
    205, C##TEST   , TIME, DB CPU                               ,       2460784,   249.17ms,    24.9%, [@@@       ],          ,           ,
    205, C##TEST   , TIME, sql execute elapsed time             ,       8059797,   816.12ms,    81.6%, [######### ],          ,           ,
    205, C##TEST   , TIME, DB time                              ,       8059797,   816.12ms,    81.6%, [######### ],          ,           ,      18.39 % unaccounted time
    205, C##TEST   , WAIT, Disk file operations I/O             ,         46307,     4.69ms,      .5%, [          ],      1828,      185.1,    25.33us average wait ----------> here
    205, C##TEST   , WAIT, ASM IO for non-blocking poll         ,             8,      .81us,      .0%, [          ],         1,         .1,        8us average wait

--  End of Stats snap 1, end=2020-12-20 09:02:36, seconds=9.9

------------------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT          | WAIT_CLASS
------------------------------------------------------------------------------------------
     .48     (48%) |    1 | fnbt9ts7bsqzn   | 0         | ON CPU         | ON CPU
     .48     (48%) |    1 | 7wv0zz7965u54   | 0         | ON CPU         | ON CPU
     .03      (3%) |    1 | 1nf0v48kt3a30   | 0         | ON CPU         | ON CPU
     .03      (3%) |    1 | 8myfvr1fb8gju   | 0         | ON CPU         | ON CPU

--  End of ASH snap 1, end=2020-12-20 09:02:36, seconds=5, samples_taken=40, AAS=1

SQL> @snapper ash 5 1 205
Sampling SID 205 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v4.30 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) 
- Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet!
----------------------------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT                    | WAIT_CLASS
----------------------------------------------------------------------------------------------------
     .46     (46%) |    1 | fnbt9ts7bsqzn   | 0         | ON CPU                   | ON CPU
     .46     (46%) |    1 | 7wv0zz7965u54   | 0         | ON CPU                   | ON CPU
     .03      (3%) |    1 | 1nf0v48kt3a30   | 0         | ON CPU                   | ON CPU
     .03      (3%) |    1 | fnbt9ts7bsqzn   | 0         | Disk file operations I/O | User I/O ----------> here
     .03      (3%) |    1 | 8myfvr1fb8gju   | 0         | ON CPU                   | ON CPU

As you can see, there is a certain proportion of time spent waiting on Disk file operations I/O. And, naturally, the p1 and p3 values in this case are both 8, as the following outputs prove:

SQL>  @ashtopSID
Enter value for sid: 205
Enter value for from: 12-20 09:00:00
Enter value for to: 12-20 09:15:00

  Total                                                                                                                                                                        
  Seconds     AAS %This   SQL_ID        SQL_OPNAME      TOP_LEVEL_SQL EVENT2                     P1TEXT           P1 P3TEXT        P3 
--------- ------- ------- ------------- --------------- ------------- -------------------------- --------------- --- ----------- ---- 
      309      .3   45% | 7wv0zz7965u54 DELETE          8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8 
      306      .3   44% | fnbt9ts7bsqzn UPDATE          8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8 
       24      .0    3% | 1nf0v48kt3a30 INSERT          8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8 
       13      .0    2% | 7wv0zz7965u54 DELETE          8myfvr1fb8gju db file sequential read    file#             1 blocks         1 
        8      .0    1% | fnbt9ts7bsqzn UPDATE          8myfvr1fb8gju db file sequential read    file#             1 blocks         1 
        7      .0    1% | 7wv0zz7965u54 DELETE          8myfvr1fb8gju Disk file operations I/O   FileOperation     8 filetype       8 ----------> here
        6      .0    1% | fnbt9ts7bsqzn UPDATE          8myfvr1fb8gju Disk file operations I/O   FileOperation     8 filetype       8 ----------> here
        5      .0    1% | 1nf0v48kt3a30 INSERT          8myfvr1fb8gju Disk file operations I/O   FileOperation     8 filetype       8 ----------> here
        3      .0    0% | 7wv0zz7965u54 DELETE          8myfvr1fb8gju db file scattered read     file#             1 blocks         3 
        3      .0    0% |                               8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8 
        2      .0    0% | 0jjc60pmrntdv SELECT          8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8 
        1      .0    0% | 1nf0v48kt3a30 INSERT          8myfvr1fb8gju ON CPU                     file#             1 blocks         1 
        1      .0    0% | 1nf0v48kt3a30 INSERT          8myfvr1fb8gju db file sequential read    file#             5 blocks         1 
        1      .0    0% | 7wv0zz7965u54 DELETE          8myfvr1fb8gju resmgr:cpu quantum         location          3                0 
        1      .0    0% | 8myfvr1fb8gju PL/SQL EXECUTE  8myfvr1fb8gju ON CPU                     FileOperation     8 filetype       8

Interestingly, ASH shows that the CPU consumed by Oracle is also related to the manipulation of files of type 8. I don't really know how to interpret these p1 and p3 values correctly in the context of ON CPU samples; presumably they are simply carried over from the session's most recent wait event.


select decode(event,null, 'on cpu', event), p1,p3, count(1)
    from dba_hist_active_sess_history
    where    sample_time between to_date('20122020 09:00:00', 'ddmmyyyy hh24:mi:ss')
                and     to_date('20122020 09:15:00', 'ddmmyyyy hh24:mi:ss')
group by decode(event,null, 'on cpu', event), p1,p3 
order by 4 desc;

DECODE(EVENT,NULL,'ONCPU',EVENT)          P1         P3   COUNT(1)
--------------------------------  ---------- ---------- ----------
on cpu                                     8          8         64
db file sequential read                    1          1         10
on cpu                                   100          0          6
Disk file operations I/O                   8          8          4
on cpu                                     1          1          4
db file scattered read                     3          3          3
on cpu                                     0          0          3
control file sequential read               0          1          3
on cpu                                     3          1          2
ASM file metadata operation                0          0          2
on cpu                               2490368          0          1
db file parallel write                     1 2147483647          1
db file async I/O submit                 106          0          1
on cpu                                     3          0          1
on cpu                                     2          1          1
on cpu                                131072          0          1
on cpu                                     5          1          1
on cpu                                 65536          0          1
oracle thread bootstrap              5910576          0          1
on cpu                                262144          0          1

Summary

Whenever you see a Disk file operations I/O wait event consuming a lot of time and CPU, get the p1, p2 and p3 values from ASH or AWR:


SQL> @sed "Disk file operations"
Show wait event descriptions matching %Disk file operations%..

EVENT_NAME                     WAIT_CLASS           PARAMETER1           PARAMETER2           PARAMETER3
------------------------------ -------------------- -------------------- -------------------- --------------------
Disk file operations I/O       User I/O             FileOperation        fileno               filetype

If the p1 and p3 values both equal 8, check whether you are inadvertently auditing too many repetitive actions by means of the following select:


SQL>  select user_name, audit_option from DBA_STMT_AUDIT_OPTS where user_name='C##TEST';

USER_NAME                 AUDIT_OPTION
------------------------- ----------------------------------------
C##TEST                   ALTER SYSTEM
C##TEST                   SYSTEM AUDIT
C##TEST                   CREATE SESSION
C##TEST                   TABLE
C##TEST                   CLUSTER
C##TEST                   TABLESPACE
C##TEST                   USER
C##TEST                   ROLLBACK SEGMENT
C##TEST                   TYPE
C##TEST                   SYNONYM
C##TEST                   PUBLIC SYNONYM
C##TEST                   VIEW
C##TEST                   SEQUENCE
C##TEST                   DATABASE LINK
C##TEST                   PUBLIC DATABASE LINK
C##TEST                   ROLE
C##TEST                   DIMENSION
C##TEST                   PROCEDURE
C##TEST                   TRIGGER
C##TEST                   PROFILE
C##TEST                   DIRECTORY
C##TEST                   MATERIALIZED VIEW
C##TEST                   MINING MODEL
C##TEST                   INDEX
C##TEST                   SELECT TABLE
C##TEST                   INSERT TABLE
C##TEST                   UPDATE TABLE
C##TEST                   DELETE TABLE
C##TEST                   NOT EXISTS
C##TEST                   SYSTEM GRANT
C##TEST                   EXECUTE PROCEDURE
C##TEST                   CONTEXT
C##TEST                   SQL TRANSLATION PROFILE
C##TEST                   LOCKDOWN PROFILE
C##TEST                   ATTRIBUTE DIMENSION
C##TEST                   HIERARCHY
C##TEST                   ANALYTIC VIEW
C##TEST                   READ DIRECTORY
C##TEST                   WRITE DIRECTORY
C##TEST                   EXECUTE DIRECTORY 

If so, reduce the number of audited actions if you want to reduce the impact of this wait event.
In passing, I think that this wait event would not have shown up at the top if I had set the audit_trail parameter to DB. I have left that undemonstrated, though.
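For the model used in this article, for example, the superfluous auditing could be switched off as follows (a sketch; adapt the user and the audit options to your own configuration):

```sql
-- Stop auditing the repetitive DML and procedure executions for the test user
NOAUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY c##test;
NOAUDIT EXECUTE PROCEDURE BY c##test;
NOAUDIT ALL BY c##test;
```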

November 29, 2020

OR-Expansion by-passed: top query block of a DML

Filed under: Oracle — hourim @ 4:48 pm

Here’s an interesting OR-Expansion limit which reared its head a little bit after a 12cR1-to-19c Oracle upgrade. It showed up in a Create Table As Select (CTAS) which used a CONCATENATION operation under 12cR1 and completed in a couple of seconds. Under 19c, however, the same CTAS uses a dramatic NESTED LOOPS join instead of the more powerful OR-Expansion and took more than 6 hours.

Here are the 12cR1 and 19c execution plans, respectively:

------------------------------------------------------------------------
| Id  | Operation                        | Name       | Rows  |Time     |
------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |            |       |         |
|   1 |  LOAD AS SELECT                  | TARGET_T   |       |         |
|   2 |   OPTIMIZER STATISTICS GATHERING |            | 71227 |00:00:05 |
|   3 |    HASH UNIQUE                   |            | 71227 |00:00:05 |
|   4 |     CONCATENATION                |            |       |         |
|   5 |      HASH JOIN                   |            | 38426 |00:00:03 |
|   6 |       TABLE ACCESS STORAGE FULL  | SOURCE_T2  | 48067 |00:00:01 |
|   7 |       TABLE ACCESS STORAGE FULL  | SOURCE_T1  |    16M|00:00:03 |
|   8 |      HASH JOIN                   |            | 32801 |00:00:03 |
|   9 |       TABLE ACCESS STORAGE FULL  | SOURCE_T2  | 48067 |00:00:01 |
|  10 |       TABLE ACCESS STORAGE FULL  | SOURCE_T1  |    11M|00:00:03 |
-------------------------------------------------------------------------

-------------------------------------------------------------------------
| Id  | Operation                        | Name       | Rows  | Time     |
-------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |            |       |          |
|   1 |  LOAD AS SELECT                  | TARGET_T   |       |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |            | 71227 | 08:53:39 |
|   3 |    HASH UNIQUE                   |            | 71227 | 08:53:39 |
|   4 |     NESTED LOOPS                 |            | 71227 | 08:53:39 |
|   5 |      TABLE ACCESS STORAGE FULL   | SOURCE_T1  |    24M| 00:00:03 |
|   6 |      TABLE ACCESS STORAGE FULL   | SOURCE_T2  |     1 | 00:00:01 |
-------------------------------------------------------------------------

There is a simple root cause for this performance deterioration: OR-Expansion can’t be used by Oracle during a DDL operation.

You can locate the root cause of this transformation limit by using the or_expand hint and observing the generated Hint Report at the bottom of the corresponding execution plan:

create table t1 (n1 number, n2 number, c1 varchar2(10));
create index t1_idx1 on t1(n1, n2);
create index t1_idx2 on t1(n2);

create table t2 as
select
   rownum id
  ,trunc((rownum-1)/6) n2
  ,trunc((rownum-1)/8) n3
  ,mod(rownum,5) n4
  ,lpad('z', 4) vc
from
 dual
connect by level <=1e3;

exec dbms_stats.gather_table_stats(user, 't1');
exec dbms_stats.gather_table_stats(user, 't2');
SQL> explain plan for
       create table t1_bis
       as
       select /*+ or_expand */
              t1.*
       from   t1, t2
       where  t1.n1 = t2.id
       or     t1.n1 = t2.n2;

Explained.

-----------------------------------------------------------
| Id  | Operation                        | Name   | Rows  |
-----------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |        |     1 |
|   1 |  LOAD AS SELECT                  | T1_BIS |       |
|   2 |   OPTIMIZER STATISTICS GATHERING |        |     1 |
|   3 |    NESTED LOOPS                  |        |     1 |
|   4 |     TABLE ACCESS FULL            | T1     |     1 |
|*  5 |     TABLE ACCESS FULL            | T2     |     1 |
-----------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("T1"."N1"="T2"."ID" OR "T1"."N1"="T2"."N2")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$1
         U -  or_expand / Top query block of a DML  -------> Note the reason for not using the hint

You can also get the reason for which Oracle bypassed the OR-Expansion in the CTAS case from the corresponding 10053 trace file:

**************************
Query transformations (QT)
**************************
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: Top query block of a DML.
ORE: Checking validity of OR Expansion for query block SEL$1 (#0)
ORE: Predicate chain before QB validity check - SEL$1
"T1"."N1"="T2"."ID" OR "T1"."N1"="T2"."N2"
ORE: Predicate chain after QB validity check - SEL$1
"T1"."N1"="T2"."ID" OR "T1"."N1"="T2"."N2"
ORE: bypassed - Top query block of a DML. ---> Note the by-pass reason

While the OR-Expansion transformation is bypassed for the CTAS, it is, however, used for the select part in 19c:

explain plan for
   select /*+ or_expand */
       t1.*
  from
    t1, t2
  where
    t1.n1 = t2.id
  or
    t1.n1 = t2.n2;

Explained.

--------------------------------------------------------
| Id  | Operation            | Name            | Rows  |
--------------------------------------------------------
|   0 | SELECT STATEMENT     |                 |     2 |
|   1 |  VIEW                | VW_ORE_F79C84EE |     2 |
|   2 |   UNION-ALL          |                 |       |
|*  3 |    HASH JOIN         |                 |     1 |
|   4 |     TABLE ACCESS FULL| T1              |     1 |
|   5 |     TABLE ACCESS FULL| T2              |  1000 |
|*  6 |    HASH JOIN         |                 |     1 |
|   7 |     TABLE ACCESS FULL| T1              |     1 |
|   8 |     TABLE ACCESS FULL| T2              |  1000 |
--------------------------------------------------------
/*+
      BEGIN_OUTLINE_DATA
      USE_HASH(@"SET$2A13AF86_2" "T2"@"SET$2A13AF86_2")
      LEADING(@"SET$2A13AF86_2" "T1"@"SET$2A13AF86_2" "T2"@"SET$2A13AF86_2")
      FULL(@"SET$2A13AF86_2" "T2"@"SET$2A13AF86_2")
      FULL(@"SET$2A13AF86_2" "T1"@"SET$2A13AF86_2")
      USE_HASH(@"SET$2A13AF86_1" "T2"@"SET$2A13AF86_1")
      LEADING(@"SET$2A13AF86_1" "T1"@"SET$2A13AF86_1" "T2"@"SET$2A13AF86_1")
      FULL(@"SET$2A13AF86_1" "T2"@"SET$2A13AF86_1")
      FULL(@"SET$2A13AF86_1" "T1"@"SET$2A13AF86_1")
      NO_ACCESS(@"SEL$9162BF3C" "VW_ORE_F79C84EE"@"SEL$F79C84EE")
      OUTLINE(@"SEL$1")
      OR_EXPAND(@"SEL$1" (1) (2))
      OUTLINE_LEAF(@"SEL$9162BF3C")
      OUTLINE_LEAF(@"SET$2A13AF86")
      OUTLINE_LEAF(@"SET$2A13AF86_1")
      OUTLINE_LEAF(@"SET$2A13AF86_2")
      ALL_ROWS
      OPT_PARAM('_optimizer_use_feedback' 'false')
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."N1"="T2"."ID")
   6 - access("T1"."N1"="T2"."N2")
       filter(LNNVL("T1"."N1"="T2"."ID")) 

Workaround

To work around this limit, you can set the optimizer_features_enable (OFE) parameter back to a previous version (12cR1 or lower), where OR-Expansion was not yet implemented, so that your CTAS can use the CONCATENATION operation. Or you can opt for the following workaround provided by Mauro Pagano (@Mautro):

SQL> explain plan for
   create table t1_bis
   as
   select *
   from
   ( select /*+ no_merge
               or_expand
            */
        t1.*
   from
     t1, t2
   where
     t1.n1 = t2.id
   or
     t1.n1 = t2.n2
     );

Explained.

----------------------------------------------------------------------------
| Id  | Operation                        | Name            | Rows  | Bytes |
----------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |                 |     2 |    66 |
|   1 |  LOAD AS SELECT                  | T1_BIS          |       |       |
|   2 |   OPTIMIZER STATISTICS GATHERING |                 |     2 |    66 |
|   3 |    VIEW                          |                 |     2 |    66 |
|   4 |     VIEW                         | VW_ORE_4C1C5B2C |     2 |    66 |
|   5 |      UNION-ALL                   |                 |       |       |
|*  6 |       HASH JOIN                  |                 |     1 |    37 |
|   7 |        TABLE ACCESS FULL         | T1              |     1 |    33 |
|   8 |        TABLE ACCESS FULL         | T2              |  1000 |  4000 |
|*  9 |       HASH JOIN                  |                 |     1 |    41 |
|  10 |        TABLE ACCESS FULL         | T1              |     1 |    33 |
|  11 |        TABLE ACCESS FULL         | T2              |  1000 |  8000 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("T1"."N1"="T2"."ID")
   9 - access("T1"."N1"="T2"."N2")
       filter(LNNVL("T1"."N1"="T2"."ID"))

It consists of wrapping the select part into an inline view and preventing this inline view from being merged with the DDL part of the CTAS. As such, the SELECT part becomes a valid candidate for the OR-Expansion transformation.
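For completeness, the OFE alternative mentioned above can be tried at session level before changing anything system-wide (12.1.0.2 is used here merely as an example of a pre-OR-Expansion optimizer version):

```sql
-- Revert the optimizer to a pre-OR-Expansion feature set for this session only
alter session set optimizer_features_enable = '12.1.0.2';

explain plan for
  create table t1_bis as
  select t1.*
  from   t1, t2
  where  t1.n1 = t2.id
  or     t1.n1 = t2.n2;
```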

Oddity

When I tried to put the or_expand hint at the create table level here’s what I got


SQL> select banner_full from v$version;

BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> explain plan for
create /*+ or_expand */ table t1_bis
as
  select
     t1.*
from
  t1, t2
where
  t1.n1 = t2.id
or
  t1.n1 = t2.n2;
explain plan for
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 8658
Session ID: 198 Serial number: 5465

With the following error in the alert log

ORA-07445: exception encountered: core dump [qkshtQBStore()+219] [SIGSEGV] [ADDR:0x258] [PC:0x2628B8B] [Address not mapped to object] []

November 15, 2020

Null-Aware anti-join, parsing and _optimizer_squ_bottomup

Filed under: Oracle — hourim @ 10:29 am

I originally wrote about the Null-Aware anti-join back in 2017, as something I kept seeing at client sites. But that article turned out to be rather about the Null-Accepting semi-join. For the sake of clarity, here are two execution plans showing these two CBO transformations in action, respectively:

----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |
|*  2 |   HASH JOIN ANTI NA |      |     1 |     6 | -- Null-Aware ANTI Join
|   3 |    TABLE ACCESS FULL| T1   |    10 |    30 |
|   4 |    TABLE ACCESS FULL| T2   |    10 |    30 |
----------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="N1")
----------------------------------------------------
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  SORT AGGREGATE     |      |     1 |     6 |
|*  2 |   HASH JOIN SEMI NA |      |     7 |    42 | --Null Accepting SEMI-join
|   3 |    TABLE ACCESS FULL| T1   |    10 |    30 |
|   4 |    TABLE ACCESS FULL| T2   |    10 |    30 |
----------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T2"."N1"="T1"."N1") 

A simple way to avoid confusing the two is to remember this:

  • Null-Aware Anti join is for rows that don't join (ANTI)
  • Null-Accepting Semi join is for rows that do (SEMI) join.
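
As a reminder, here is a hedged sketch of the query shapes that typically produce each of the two plans above (whether the CBO actually picks HASH JOIN ANTI NA / SEMI NA still depends on statistics, nullability and release):

```sql
-- NOT IN against a nullable column: candidate for a Null-Aware ANTI join
select count(*)
from   t1
where  t1.n1 not in (select n1 from t2);

-- EXISTS OR-ed with an IS NULL predicate: candidate for a
-- Null-Accepting SEMI join
select count(*)
from   t1
where  t1.n1 is null
or     exists (select null from t2 where t2.n1 = t1.n1);
```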

In this blog post, I will show how fixing a really weird real-life parsing issue by changing the value of the _optimizer_squ_bottomup parameter created a performance issue in another query. I will demonstrate that this bad side effect occurred because the Null-Aware anti-join transformation is bypassed by Oracle under a non-default value of this hidden parameter.

SQL> select banner_full from gv$version;

BANNER_FULL
----------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

create table t1 as select rownum n1, trunc((rownum -1/5)) n2 from dual connect by level <= 1e6; 
create table t2 as select rownum n1, trunc((rownum -1/3)) n2 from dual connect by level <= 1e6;
 
update t1 set n1 = null where n1<=1e3;
 
exec dbms_stats.gather_table_stats(user, 't1');
exec dbms_stats.gather_table_stats(user, 't2');

SELECT 
   count(1)
FROM t1
   WHERE t1.n1 NOT IN (select n1 from t2);
   
Elapsed: 00:00:00.31 -----> only 0.31 seconds

Execution Plan
----------------------------------------------------------
Plan hash value: 1650861019
------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc|
------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    10 |       |
|   1 |  SORT AGGREGATE     |      |     1 |    10 |       |
|*  2 |   HASH JOIN ANTI NA |      |  1000K|  9765K|    16M|
|   3 |    TABLE ACCESS FULL| T1   |  1000K|  4882K|       |
|   4 |    TABLE ACCESS FULL| T2   |  1000K|  4882K|       |
------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."N1"="N1")

Statistics
------------------------------------------
          0  recursive calls
          0  db block gets
       4154  consistent gets
          0  physical reads
          0  redo size
        549  bytes sent via SQL*Net to client
        430  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

As you can see, Oracle used a Null-Aware anti-join (HASH JOIN ANTI NA) and executed the query in about a third of a second with roughly 4K consistent gets.

But if you happen to hit a parsing issue (mainly in 12cR2) and work around it by changing the following parameter:

SQL> alter session set "_optimizer_squ_bottomup" = false;

Then see how your original query will behave:

SELECT count(1) FROM t1 WHERE t1.n1 NOT IN (select n1 from t2)

Global Information
------------------------------
 Status              :  EXECUTING             
 Instance ID         :  1                     
 Session             :  C##MHOURI (274:59197) 
 SQL ID              :  7009s3j53bdgv         
 SQL Execution ID    :  16777226              
 Execution Started   :  11/14/2020 14:43:59   
 First Refresh Time  :  11/14/2020 14:44:05   
 Last Refresh Time   :  11/14/2020 15:00:43   
 Duration            :  1004s     ------> still in execution phase after 1004s            
 Module/Action       :  SQL*Plus/-            
 Service             :  orcl19c               
 Program             :  sqlplus.exe           

Global Stats
=========================================
| Elapsed |   Cpu   |  Other   | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|    1003 |     982 |       21 |    50M |
========================================= 
SQL Plan Monitoring Details (Plan Hash Value=59119136)

=========================================================================================
| Id   |       Operation       | Name |  Rows   |   Time    | Start  | Execs |   Rows   |
|      |                       |      | (Estim) | Active(s) | Active |       | (Actual) |
=========================================================================================
|    0 | SELECT STATEMENT      |      |         |           |        |     1 |          |
|    1 |   SORT AGGREGATE      |      |       1 |           |        |     1 |          |
| -> 2 |    FILTER             |      |         |       999 |     +6 |     1 |        0 |
| -> 3 |     TABLE ACCESS FULL | T1   |      1M |       999 |     +6 |     1 |     219K |
| -> 4 |     TABLE ACCESS FULL | T2   |       1 |      1004 |     +1 |  218K |     218K |
=========================================================================================

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NULL)
   4 - filter(LNNVL("N1"<>:B1))

It has switched to a dramatically worse execution plan.

And this is a perfect representation of what can happen in a real-life production system. The FILTER operation acts as a NESTED LOOPS operation would have done: the inner row source (operation id n°4) has been started 218K times. Moreover, the performance pain would have been even worse if your real-life query ran on an Exadata machine, where the LNNVL function (used to handle the ever-threatening NULL) prevents smart scan from kicking in.

Here's a snippet of the 10053 trace file taken when this transformation is used:

CBQT: Validity checks passed for 51rdr10tfdcb1.

Subquery removal for query block SEL$2 (#2)
RSW: Not valid for subquery removal SEL$2 (#2)
Subquery unchanged.
SU:   Transform ALL subquery to a null-aware antijoin.
SJC: Considering set-join conversion in query block SEL$5DA710D3 (#1)

AP: Adaptive joins bypassed for query block SEL$5DA710D3 due to null-aware anti-join

***************
Now joining: T2[T2]#1
***************
NL Join

Best:: JoinMethod: HashNullAwareAnti
       Cost: 2809.074624  Degree: 1  Resp: 2809.074624  Card: 1000000.000000 Bytes:

And here’s a snippet of the same trace file when this transformation is not used:

*************************************
  PARAMETERS WITH ALTERED VALUES
  ******************************
Compilation Environment Dump
_pga_max_size                       = 333600 KB
_optimizer_squ_bottomup             = false
_optimizer_use_feedback             = false
Bug Fix Control Environment 

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
CBQT bypassed for query block SEL$1 (#0): Disabled by parameter.
CBQT: Validity checks failed for cw6z7z8cajdbq.

Can we infer that changing the default value of _optimizer_squ_bottomup disables Cost-Based Query Transformation altogether?

Summary

Changing the _optimizer_squ_bottomup value at a global level will fix several parsing issues caused by the not-yet-published Bug 26661798: HIGH PARSE TIME FOR COMPLEX QUERY 12C. But bear in mind that it might also introduce a performance regression in queries using a NOT IN predicate, where the Null-Aware anti-join transformation is invalidated by the non-default value of this hidden parameter.
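
Rather than flipping the hidden parameter globally, one option is to confine the workaround to the statement that actually suffers from the parsing issue. Here is a hedged sketch using a SQL patch (the SQL_ID below is a placeholder; dbms_sqldiag.create_sql_patch is the same call used later in this blog):

```sql
-- Hedged sketch: scope the workaround to a single statement via a SQL patch,
-- so the Null-Aware anti-join stays available everywhere else.
declare
  l_patch varchar2(128);
begin
  l_patch := sys.dbms_sqldiag.create_sql_patch(
               sql_id    => 'your_sql_id_here',  -- placeholder: the slow-to-parse SQL_ID
               hint_text => q'[opt_param('_optimizer_squ_bottomup' 'false')]',
               name      => 'squ_bottomup_patch');
end;
/
```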

October 24, 2020

Function based indexes and cursor sharing

Filed under: Oracle — hourim @ 1:20 pm

To be selected by Oracle, a Function Based Index (FBI) must have a definition that matches the predicate expression of the query. For example, the following FBI index is used because its definition matches perfectly the predicate part:

SQL> create index fbi_ind on t1 (case when mod(id,100) =0 then 1 else -1 end);

Index created.

SQL> select
         count(1) 
     from t1 
     where 
          (case when mod(id,100) =0 then 1 else -1 end) = 1;

  COUNT(1)
----------
     11010

SQL> @xpsimp

SQL_ID  b637sp3wfptm4, child number 0
-------------------------------------
-------------------------------------------------
| Id  | Operation             | Name    | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT      |         |       |
|   1 |  SORT AGGREGATE       |         |     1 |
|*  2 |   INDEX FAST FULL SCAN| FBI_IND | 11010 |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."SYS_NC00005$"=1)

However, if I slightly change the predicate part of the above query, then Oracle will no longer use that index:

SQL> select count(1) 
     from t1 
        where (case when mod(id,100) =0 then 9 else -1 end) = 1;

  COUNT(1)
----------
         0
-------------------------------------------
| Id  | Operation          | Name | Rows  |
-------------------------------------------
|   0 | SELECT STATEMENT   |      |       |
|   1 |  SORT AGGREGATE    |      |     1 |
|*  2 |   TABLE ACCESS FULL| T1   | 11010 |
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(CASE MOD("ID",100) WHEN 0 THEN 9 ELSE (-1) END =1)

Again, I have to create a matching index if I want Oracle to use an index access path for the above query:

SQL> create index fbi_ind_2 on t1 (case when mod(id,100) =0 then 9 else -1 end);

SQL> select count(1) 
     from t1 
        where (case when mod(id,100) =0 then 9 else -1 end) = 1;

  COUNT(1)
----------
         0
---------------------------------------------------
| Id  | Operation             | Name      | Rows  |
---------------------------------------------------
|   0 | SELECT STATEMENT      |           |       |
|   1 |  SORT AGGREGATE       |           |     1 |
|*  2 |   INDEX FAST FULL SCAN| FBI_IND_2 | 11010 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."SYS_NC00006$"=1)
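
The SYS_NC00005$ and SYS_NC00006$ columns showing up in the predicates are the hidden virtual columns Oracle creates behind each FBI. A quick way to see their definitions, sketched here against the standard dictionary view, is:

```sql
-- Hedged sketch: list the hidden virtual columns backing the FBIs on T1,
-- together with the expression each one evaluates.
select column_name, data_default
from   user_tab_cols
where  table_name     = 'T1'
and    hidden_column  = 'YES'
and    virtual_column = 'YES';
```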

Let's now set the cursor_sharing parameter to FORCE and repeat the experiment:

SQL> alter session set cursor_sharing=force;

select
        count(1)
    from t1
    where
         (case when mod(id,100) =0 then 1 else -1 end) = 1;

  COUNT(1)
----------
     11010

SQL_ID  31tfkvvc1k4hf, child number 0
-------------------------------------
-------------------------------------------
| Id  | Operation          | Name | Rows  |
-------------------------------------------
|   0 | SELECT STATEMENT   |      |       |
|   1 |  SORT AGGREGATE    |      |     1 |
|*  2 |   TABLE ACCESS FULL| T1   | 11010 |
-------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(CASE MOD("ID",:SYS_B_1) WHEN :SYS_B_2 THEN :SYS_B_3 ELSE
              (-:SYS_B_4) END =:SYS_B_5)

The FBI index is no longer used. Do you know why?

This is simply because, as stated above, the definition of the FBI_IND function-based index no longer matches predicate n°2: the forced cursor sharing value has transformed the literal values in the predicate into system-generated bind variables (SYS_B_1, SYS_B_2, etc.).

This is exactly what happened to my client, who uses third-party software. To remedy the situation, I created a SQL patch forcing an exact value of the cursor_sharing parameter, but only for this particular query:

SQL> DECLARE
  2     patch_name varchar2(100);
  3  BEGIN
  4     patch_name := sys.dbms_sqldiag.create_sql_patch(
  5                  sql_id    => '31tfkvvc1k4hf',
  6                  hint_text => 'cursor_sharing_exact',
  7                  name      => '31tfkvvc1k4hf_cursor_sharing_exact');
  8  END;
  9  /

PL/SQL procedure successfully completed.
select
        count(1)
    from t1
    where
         (case when mod(id,100) =0 then 1 else -1 end) = 1;

  COUNT(1)
----------
     11010

SQL> @xpsimp

SQL_ID  ft9dsb7qj8yfr, child number 0
-------------------------------------
select         count(1)     from t1     where          (case when
mod(id,100) =0 then 1 else -1 end) = 1

-------------------------------------------------
| Id  | Operation             | Name    | Rows  |
-------------------------------------------------
|   0 | SELECT STATEMENT      |         |       |
|   1 |  SORT AGGREGATE       |         |     1 |
|*  2 |   INDEX FAST FULL SCAN| FBI_IND | 11010 |
-------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."SYS_NC00005$"=1)

Note
-----
   - SQL patch "31tfkvvc1k4hf_cursor_sharing_exact" used for this statement

I know you've already noticed that the FBI index is now used by Oracle, but did you also notice that the SQL_ID has changed (ft9dsb7qj8yfr)?

This is the effect of the applied SQL patch, which imposes the exact value of the cursor_sharing parameter. So, before rushing to make your critical query use the FBI index, measure whether going back to literal values will not simply turn your execution pain into a parsing penalty.

While I was troubleshooting a similar issue for the same third-party software, something caught my attention. I had created a function-based index using a SUBSTR function and was about to create a similar patch forcing the exact cursor sharing value when I noticed that my new FBI index was already being used.

Here’s a model of my client case:

create table t1
    as
        select
            rownum               n1,
            'xy' || rownum         vc1,
            mod(rownum, 10)       n2
        from
            dual
        connect by
            level <= 1e4;

create index idx_t1 on t1(substr(vc1,3,1));

alter session set cursor_sharing=force;

select * from t1 where substr(vc1,3,1)='5';

SQL> @xpout

SQL_ID  djypd1v8qjawh, child number 0
-------------------------------------
select * from t1 where substr(vc1,:"SYS_B_0",:"SYS_B_1")=:"SYS_B_2"

--------------------------------------------------------------
| Id  | Operation                           | Name   | Rows  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT                    |        |       |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1     |   100 |
|*  2 |   INDEX RANGE SCAN                  | IDX_T1 |    40 |
-------------------------------------------------------------
Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      OPT_PARAM('_optimizer_use_feedback' 'false')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" "IDX_T1")
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."SYS_NC00004$"=:SYS_B_2)

My client was upgrading from 12.2 to 19c, and I was practically sure that in the source release the FBI index was not used under forced cursor sharing. This is why I set the optimizer_features_enable (OFE) parameter in the destination database back to the previous version and ran the same query again:


SQL> alter session set optimizer_features_enable='12.2.0.1';

Session altered.

SQL> select * from t1 where substr(vc1,3,1)='5';

SQL_ID  djypd1v8qjawh, child number 1
-------------------------------------
select * from t1 where substr(vc1,:"SYS_B_0",:"SYS_B_1")=:"SYS_B_2"

Plan hash value: 3617692013
------------------------------------------
| Id  | Operation         | Name | Rows  |
------------------------------------------
|   0 | SELECT STATEMENT  |      |       |
|*  1 |  TABLE ACCESS FULL| T1   |   100 |
------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('19.1.0')
      OPT_PARAM('_optimizer_use_feedback' 'false')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(SUBSTR("VC1",:SYS_B_0,:SYS_B_1)=:SYS_B_2)

I then checked different OFE values until I realized that this change was introduced in 18.1:


SQL> alter session set optimizer_features_enable='18.1.0';

SQL> select * from t1 where substr(vc1,3,1)='5';

Plan hash value: 3491035275
--------------------------------------------------------------
| Id  | Operation                           | Name   | Rows  |
--------------------------------------------------------------
|   0 | SELECT STATEMENT                    |        |       |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1     |   100 |
|*  2 |   INDEX RANGE SCAN                  | IDX_T1 |    40 |
--------------------------------------------------------------

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('19.1.0')
      OPT_PARAM('_optimizer_use_feedback' 'false')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" "IDX_T1")
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."SYS_NC00004$"=:SYS_B_2)

Thanks to Ivica Arsov (@IvicaArsov), I now know that this was addressed by Oracle in the following bug:

Patch 25575369: FUNCTION BASED INDEX BASED WITH SUBSTR NOT USED WHEN CURSOR_SHARING=FORCE

It's a good improvement by Oracle to let FBI indexes containing a SUBSTR function in their definition be used under a forced cursor sharing value.

And it also looks like the LPAD function no longer prevents FBI usage (under forced cursor sharing), as the following example shows:


create index idx2_t1 on t1(lpad(vc1,2,'z'));

select * from t1 where lpad(vc1,2,'z') = 'xy';

SQL_ID  g9zvaj3sq5ptm, child number 0
-------------------------------------
select * from t1 where lpad(vc1,:"SYS_B_0",:"SYS_B_1") = :"SYS_B_2"

Plan hash value: 2927120560
---------------------------------------------------------------
| Id  | Operation                           | Name    | Rows  |
---------------------------------------------------------------
|   0 | SELECT STATEMENT                    |         |       |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1      |   100 |
|*  2 |   INDEX RANGE SCAN                  | IDX2_T1 |    40 |
---------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('19.1.0')
      OPT_PARAM('_optimizer_use_feedback' 'false')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" "IDX2_T1")
      BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."SYS_NC00005$"=:SYS_B_2)

Bottom Line

In your Oracle learning process, I think it is always a good idea to unlearn things you once verified, particularly when you move from an older to a newer release. In this article, I demonstrated that, from 18c onwards, certain function-based indexes can be used by Oracle under a forced cursor sharing value.

September 30, 2020

DDL Optimization and VPD

Filed under: Oracle — hourim @ 4:05 pm

I have mentioned in the past two situations where DDL optimization is not possible.

In this article, I will present a new restriction that was signaled to me by Moustafa Ahmed (@moustapha_dba). He agreed, of course, that I hijack his finding and write about it, so that the DDL optimization restrictions are gathered in a single place.

Simply put, DDL optimization is impossible on a table to which a VPD policy is applied.

SQL> select banner_full from v$version;

BANNER_FULL
------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> create table t1 as select rownum n1, lpad('x',5) v1
         from dual connect by level <= 5;

SQL> alter table t1 add c1 number default 42 not null;

SQL> select count(1) from t1 where c1=42;

  COUNT(1)
----------
         5
		 
SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  a4v8hg2qxzp1g, child number 0
-------------------------------------
select count(1) from t1 where c1=42

Plan hash value: 3724264953
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     5 |    15 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(NVL("C1",42)=42) -- spot the presence of the NVL

The presence of the NVL function in the predicate part is enough to confirm that DDL optimization kicked in during the above ALTER TABLE.
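
Besides the NVL predicate, DDL-optimized columns are also recorded in the internal sys.ecol$ dictionary table. The following is only a hedged sketch against an undocumented table (it requires SYS-like privileges, and its layout may change across releases), used here purely as a second confirmation:

```sql
-- Hedged sketch: sys.ecol$ (undocumented) keeps one row per column added
-- via DDL optimization, with the binary representation of its default.
select o.object_name, e.colnum, e.binarydefval
from   sys.ecol$   e,
       dba_objects o
where  e.tabobj#      = o.object_id
and    o.object_name  = 'T1'
and    o.owner        = user;
```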

Let's now create a VPD policy, apply it to the t1 table, and add a new column:

-- create policy function
create or replace function
    f_t1_policy(piv_schema in varchar2
               ,piv_object in varchar2)
return varchar2
is
  lv_return_value varchar2(4000);
begin
  if sys_context('USERENV','LANG') = 'US'
  then
    lv_return_value := '1=1';
  else
    lv_return_value := '1=0';
  end if;
   return lv_return_value;
end f_t1_policy;
/
-- assign this policy to t1 table
begin
 dbms_rls.add_policy
   (object_schema    => user,
    object_name      => 'T1',
       policy_name      => 'F_T1_POLICY',
       function_schema    => user,
       policy_function  => 'F_T1_POLICY',
       statement_types  => 'SELECT'
    );
end;
/
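
Before re-running the ALTER TABLE, it is worth confirming the policy is actually attached and enabled. A sketch using the documented USER_POLICIES view:

```sql
-- Hedged sketch: verify the VPD policy is attached and enabled on T1
select object_name, policy_name, enable
from   user_policies
where  object_name = 'T1';
```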

SQL> alter table t1 add c2 number default 43 not null;

SQL> select count(1) from t1 where c2=43;

  COUNT(1)
----------
         0

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  6vk08skyq9v43, child number 0
-------------------------------------
select count(1) from t1 where c2=43

Plan hash value: 4294799605
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE     |      |     1 |     3 |            |          |
|*  2 |   FILTER            |      |       |       |            |          |
|*  3 |    TABLE ACCESS FULL| T1   |     5 |    15 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(NULL IS NOT NULL)
   3 - filter("C2"=43)

Two things can be spotted in the predicate part (always keep an eye on the predicate part 😊):

  • filter(NULL IS NOT NULL): indicates that a policy has been applied to the select
  • filter("C2"=43): the absence of the NVL function indicates that Oracle didn't use the DDL optimization feature when asked to add the c2 column

Although there is now a mix of two columns, c1 added using DDL optimization and c2 added without it, Oracle is nevertheless able to manage this mix, as shown by the below query involving both columns:

SQL> select count(1) from t1 where c1=42 and c2=43;

  COUNT(1)
----------
         0

SQL> select * from table(dbms_xplan.display_cursor);


SQL_ID  3n5t3vpw2an0k, child number 0
-------------------------------------
select count(1) from t1 where c1=42 and c2=43

Plan hash value: 4294799605
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  SORT AGGREGATE     |      |     1 |     3 |
|*  2 |   FILTER            |      |       |       |
|*  3 |    TABLE ACCESS FULL| T1   |     5 |    15 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(NULL IS NOT NULL)
   3 - filter((NVL("C1",42)=42 AND "C2"=43))

Ahmed Moustafa mentioned that his ALTER TABLE ADD COLUMN performance pain disappears when he alters his table from the SYS account. This is correct because the Oracle documentation says that SYS is exempt from any security policy. This is indeed backed up by the following example:

SQL> show user
USER is "SYS"

SQL> alter table c##mhouri.t1 add c3 number default 44 not null;

Table altered.


SQL> select count(1) from c##mhouri.t1 where c3=44;

  COUNT(1)
----------
         5

SQL> select * from table(dbms_xplan.display_cursor);


SQL_ID  457vn8dqdu25t, child number 0
-------------------------------------
select count(1) from c##mhouri.t1 where c3=44

Plan hash value: 3724264953

---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |
|*  2 |   TABLE ACCESS FULL| T1   |     5 |    15 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(NVL("C3",44)=44)
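
If running the DDL as SYS is not an option, a possible alternative, sketched below and untested in this article (and only suitable for a maintenance window, since it briefly opens the table up), is to disable the policy around the ALTER:

```sql
-- Hedged sketch: temporarily disable the VPD policy so the ADD COLUMN
-- can benefit from DDL optimization, then re-enable it.
exec dbms_rls.enable_policy(user, 'T1', 'F_T1_POLICY', enable => false);

alter table t1 add c4 number default 45 not null;

exec dbms_rls.enable_policy(user, 'T1', 'F_T1_POLICY', enable => true);
```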

September 21, 2020

OR-Expansion: bypassed – query block has subquery correlated to non-parent

Filed under: Oracle — hourim @ 6:04 pm

It looks like the Oracle 19c release comes with a new heuristic restriction on the OR-expansion transformation:

  ORE: bypassed - query block has subquery correlated to non-parent.
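
The bypass message above surfaces in a CBO (event 10053) trace. A minimal way to capture one for a single statement, sketched here (the trace file lands in the session's diagnostic trace directory; EXPLAIN PLAN forces the hard parse that triggers the dump), is:

```sql
-- Hedged sketch: dump a 10053 trace for the statement under investigation
alter session set events '10053 trace name context forever, level 1';

explain plan for
select /* the query under investigation */ * from dual;

alter session set events '10053 trace name context off';
```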

Let's see this in action via a reproducible model based on a real-life query which completes instantaneously in 12cR2 but needs 2 minutes when run in 19c.

1. Reproducible model

First, here are three tables approximating the real-life case as closely as possible:

SQL> select banner_full from v$version;

BANNER_FULL
------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

create table t1 as
    select rownum id
          , trunc((rownum -1/3)) n2
          , trunc(rownum -1/5) n3
          , mod(rownum,10) n4
          , lpad('x',10) vc 
    from dual 
connect by level <=1e3;

create table t2 as 
     select rownum id
          , trunc((rownum -1/6)) n2
          , trunc(rownum -1/8) n3
          , mod(rownum,5) n4
          , lpad('z',10) vc 
      from dual 
      connect by level <=1e3;
      
create table t3 as 
       select rownum id
            , trunc(sysdate + rownum -1/3) d1
            , trunc(dbms_random.value(1,1000)) r 
        from dual 
        connect by level <=20;

And then the query

explain plan for
  select /*+ or_expand */
      t1.id
	 ,t1.n2
	 ,t2.vc
  from
     t1, t2
  where
     t1.id = t2.id
  and trim(t1.vc) = 'x'
  or t1.n3 = (select
                  t3.r
              from t3
              where d1 = trunc(sysdate +1)
              )
  and t1.n4 = (select
                   max(n4)
               from t1 e
               where e.n4 = t1.n3
               and e.n2 = (select 
                               max(f.n2)
                           from t1 f
                           where f.n3 = t1.n3 -- this is the root cause of the or expansion bypass
						   )
			   );

As you can see, this query is already in DNF (Disjunctive Normal Form) since its WHERE clause is an OR of two distinct conjuncts, as the following shows:

  select 
      {list of columns}
  from
     t1, t2
  where
     t1.id = t2.id
  and 
    (trim(t1.vc) = 'x' -- conjunct n°1
  or 
     t1.n3 = (select   -- conjunct n°2
                  t3.r
              from t3
              where d1 = trunc(sysdate +1)
              )    
     )
  and t1.n4 = (select
                   max(n4)
               from t1 e
               where e.n4 = t1.n3
               and e.n2 = (select 
                               max(f.n2)
                           from t1 f
                           where f.n3 = t1.n3)
			   );

For this kind of query, Oracle doesn't need to apply a DNF transformation beforehand in order to consider the OR-expansion.

Now, here are, respectively, the 12cR2 and 19c execution plans of the above query:

-------------------------------------------------------------------------------------------
| Id  | Operation               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                 |    11 |   363 |  3024   (1)| 00:00:01 |
|   1 |  VIEW                   | VW_ORE_F79C84EE |    11 |   363 |  3024   (1)| 00:00:01 |
|   2 |   UNION-ALL             |                 |       |       |            |          |
|*  3 |    FILTER               |                 |       |       |            |          |
|   4 |     MERGE JOIN CARTESIAN|                 |  1000 | 26000 |     6   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL  | T1              |     1 |    15 |     3   (0)| 00:00:01 |
|*  6 |       TABLE ACCESS FULL | T3              |     1 |    12 |     2   (0)| 00:00:01 |
|   7 |      BUFFER SORT        |                 |  1000 | 11000 |     3   (0)| 00:00:01 |
|   8 |       TABLE ACCESS FULL | T2              |  1000 | 11000 |     3   (0)| 00:00:01 |
|   9 |     SORT AGGREGATE      |                 |     1 |     7 |            |          |
|* 10 |      FILTER             |                 |       |       |            |          |
|* 11 |       TABLE ACCESS FULL | T1              |   100 |   700 |     3   (0)| 00:00:01 |
|  12 |       SORT AGGREGATE    |                 |     1 |     8 |            |          |
|* 13 |        TABLE ACCESS FULL| T1              |     1 |     8 |     3   (0)| 00:00:01 |
|* 14 |    FILTER               |                 |       |       |            |          |
|* 15 |     HASH JOIN           |                 |    10 |   410 |     6   (0)| 00:00:01 |
|* 16 |      TABLE ACCESS FULL  | T1              |    10 |   260 |     3   (0)| 00:00:01 |
|  17 |      TABLE ACCESS FULL  | T2              |  1000 | 15000 |     3   (0)| 00:00:01 |
|* 18 |     TABLE ACCESS FULL   | T3              |     1 |    12 |     2   (0)| 00:00:01 |
|  19 |     SORT AGGREGATE      |                 |     1 |     7 |            |          |
|* 20 |      FILTER             |                 |       |       |            |          |
|* 21 |       TABLE ACCESS FULL | T1              |   100 |   700 |     3   (0)| 00:00:01 |
|  22 |       SORT AGGREGATE    |                 |     1 |     8 |            |          |
|* 23 |        TABLE ACCESS FULL| T1              |     1 |     8 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("T1"."N4"= (SELECT MAX("N4") FROM "T1" "E" WHERE "E"."N2"= (SELECT
              MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1) AND "E"."N4"=:B2))
   5 - filter("T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3" WHERE
              "D1"=TRUNC(SYSDATE@!+1)))
   6 - filter("D1"=TRUNC(SYSDATE@!+1))
  10 - filter("E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1))
  11 - filter("E"."N4"=:B1)
  13 - filter("F"."N3"=:B1)
  14 - filter(LNNVL("T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3" WHERE
              "D1"=TRUNC(SYSDATE@!+1))) OR LNNVL("T1"."N4"= (SELECT MAX("N4") FROM "T1" "E"
              WHERE "E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1) AND
              "E"."N4"=:B2)))
  15 - access("T1"."ID"="T2"."ID")
  16 - filter(TRIM("T1"."VC")='x')
  18 - filter("D1"=TRUNC(SYSDATE@!+1))
  20 - filter("E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1))
  21 - filter("E"."N4"=:B1)
  23 - filter("F"."N3"=:B1)

And the 19c plan:

------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |   110 |  4510 |  2201   (2)| 00:00:01 |
|*  1 |  FILTER               |      |       |       |            |          |
|   2 |   MERGE JOIN CARTESIAN|      |  1000K|    39M|  2201   (2)| 00:00:01 |
|   3 |    TABLE ACCESS FULL  | T1   |  1000 | 26000 |     4   (0)| 00:00:01 |
|   4 |    BUFFER SORT        |      |  1000 | 15000 |  2197   (2)| 00:00:01 |
|   5 |     TABLE ACCESS FULL | T2   |  1000 | 15000 |     2   (0)| 00:00:01 |
|*  6 |   TABLE ACCESS FULL   | T3   |     1 |    12 |     3   (0)| 00:00:01 |
|   7 |   SORT AGGREGATE      |      |     1 |     7 |            |          |
|*  8 |    FILTER             |      |       |       |            |          |
|*  9 |     TABLE ACCESS FULL | T1   |   100 |   700 |     4   (0)| 00:00:01 |
|  10 |     SORT AGGREGATE    |      |     1 |     8 |            |          |
|* 11 |      TABLE ACCESS FULL| T1   |     1 |     8 |     4   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T1"."ID"="T2"."ID" AND TRIM("T1"."VC")='x' OR "T1"."N3"=
              (SELECT "T3"."R" FROM "T3" "T3" WHERE "D1"=TRUNC(SYSDATE@!+1)) AND
              "T1"."N4"= (SELECT MAX("N4") FROM "T1" "E" WHERE "E"."N2"= (SELECT
              MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1) AND "E"."N4"=:B2))
   6 - filter("D1"=TRUNC(SYSDATE@!+1))
   8 - filter("E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE
              "F"."N3"=:B1))
   9 - filter("E"."N4"=:B1)
  11 - filter("F"."N3"=:B1)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$1
         U -  or_expand 

Notice how the OR-expansion has been used in 12cR2 and ignored in 19c.

The bypass reason recorded in the 10053 trace file is

 ORE: bypassed - query block has subquery correlated to non-parent. 

It appears six times in the trace file:


ORE: Predicate chain after QB validity check - SEL$4
"F"."N3"="T1"."N3"
ORE: bypassed - query block has subquery correlated to non-parent

ORE: Predicate chain after QB validity check - SEL$3
"E"."N4"="T1"."N3" AND "E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F")
ORE: bypassed - query block has subquery correlated to non-parent.

ORE: Predicate chain after QB validity check - SEL$1
"T1"."ID"="T2"."ID" AND TRIM("T1"."VC")='x' OR "T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3") 
AND "T1"."N4"= (SELECT MAX("E"."N4") FROM "T1" "E")
ORE: bypassed - query block has subquery correlated to non-parent.

ORE: Predicate chain after QB validity check - SEL$4
"F"."N3"=:B1
ORE: bypassed - query block has subquery correlated to non-parent.


ORE: Predicate chain after QB validity check - SEL$3
"E"."N4"=:B1 AND "E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F")
ORE: bypassed - query block has subquery correlated to non-parent.

ORE: Predicate chain after QB validity check - SEL$1
"T1"."ID"="T2"."ID" AND TRIM("T1"."VC")='x' OR "T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3") 
AND "T1"."N4"= (SELECT MAX("E"."N4") FROM "T1" "E")
ORE: bypassed - query block has subquery correlated to non-parent.
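For reference, the CBO trace from which these lines come can be produced with the classic 10053 event (a standard technique; the tracefile identifier here is just an example):

```sql
alter session set tracefile_identifier = 'ore_bypass';
alter session set events '10053 trace name context forever, level 1';

-- run (or explain plan for) the statement under investigation here

alter session set events '10053 trace name context off';

-- locate the resulting trace file:
select value from v$diag_info where name = 'Default Trace File';
```

Searching the resulting file for the "ORE:" prefix isolates the OR-expansion decisions shown above.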

But it is fairly likely that the bypass is caused by this occurrence:


ORE: Predicate chain after QB validity check - SEL$4
"F"."N3"="T1"."N3"
ORE: bypassed - query block has subquery correlated to non-parent

Because if I comment out this part of the query, then the OR-expansion is used:


select /*+ or_expand */
      t1.id
     ,t1.n2
     ,t2.vc
  from
     t1, t2
  where
     t1.id = t2.id
  and trim(t1.vc) = 'x'
  or t1.n3 = (select
                  t3.r
              from t3
              where d1 = trunc(sysdate +1)
              )
  and t1.n4 = (select
                   max(n4)
               from t1 e
               where e.n4 = t1.n3
               and e.n2 = (select
                               max(f.n2)
                           from t1 f
                           --where f.n3 = t1.n3
                           )
               ); 

---------------------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |       |       |    31 (100)|          |
|   1 |  VIEW                     | VW_ORE_F79C84EE |  1010 | 33330 |    31   (4)| 00:00:01 |
|   2 |   UNION-ALL               |                 |       |       |            |          |
|   3 |    MERGE JOIN CARTESIAN   |                 |  1000 | 52000 |    17   (6)| 00:00:01 |
|*  4 |     HASH JOIN             |                 |     1 |    41 |    13   (8)| 00:00:01 |
|   5 |      VIEW                 | VW_SQ_1         |     1 |    26 |     9  (12)| 00:00:01 |
|   6 |       HASH GROUP BY       |                 |     1 |     7 |     9  (12)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL  | T1              |     1 |     7 |     4   (0)| 00:00:01 |
|   8 |         SORT AGGREGATE    |                 |     1 |     4 |            |          |
|   9 |          TABLE ACCESS FULL| T1              |  1000 |  4000 |     4   (0)| 00:00:01 |
|* 10 |      TABLE ACCESS FULL    | T1              |     1 |    15 |     4   (0)| 00:00:01 |
|* 11 |       TABLE ACCESS FULL   | T3              |     1 |    12 |     3   (0)| 00:00:01 |
|  12 |     BUFFER SORT           |                 |  1000 | 11000 |    13   (8)| 00:00:01 |
|  13 |      TABLE ACCESS FULL    | T2              |  1000 | 11000 |     4   (0)| 00:00:01 |
|* 14 |    FILTER                 |                 |       |       |            |          |
|* 15 |     HASH JOIN             |                 |    10 |   410 |     8   (0)| 00:00:01 |
|* 16 |      TABLE ACCESS FULL    | T1              |    10 |   260 |     4   (0)| 00:00:01 |
|  17 |      TABLE ACCESS FULL    | T2              |  1000 | 15000 |     4   (0)| 00:00:01 |
|* 18 |     TABLE ACCESS FULL     | T3              |     1 |    12 |     3   (0)| 00:00:01 |
|  19 |     SORT AGGREGATE        |                 |     1 |     7 |            |          |
|* 20 |      TABLE ACCESS FULL    | T1              |     1 |     7 |     4   (0)| 00:00:01 |
|  21 |       SORT AGGREGATE      |                 |     1 |     4 |            |          |
|  22 |        TABLE ACCESS FULL  | T1              |  1000 |  4000 |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("T1"."N4"="MAX(N4)" AND "ITEM_1"="T1"."N3")
   7 - filter("E"."N2"=)
  10 - filter("T1"."N3"=)
  11 - filter("D1"=TRUNC(SYSDATE@!+1))
  14 - filter((LNNVL("T1"."N3"=) OR LNNVL("T1"."N4"=)))
  15 - access("T1"."ID"="T2"."ID")
  16 - filter(TRIM("T1"."VC")='x')
  18 - filter("D1"=TRUNC(SYSDATE@!+1))
  20 - filter(("E"."N4"=:B1 AND "E"."N2"=))

2. Work-around

In my 19c real-life query, which was taking 150 seconds because the OR-expansion had been refused, I simply used the /*+ use_concat */ hint to get back the 12cR2 execution time, i.e. a few milliseconds:

select /*+ use_concat */
      t1.id
     ,t1.n2
     ,t2.vc
  from
     t1, t2
  where
     t1.id = t2.id
  and trim(t1.vc) = 'x'
  or t1.n3 = (select
                  t3.r
              from t3
              where d1 = trunc(sysdate +1)
              )
  and t1.n4 = (select
                   max(n4)
               from t1 e
               where e.n4 = t1.n3
               and e.n2 = (select
                               max(f.n2)
                           from t1 f
                           where f.n3 = t1.n3
                           )
               );
-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |  1010 | 41410 |    30   (0)| 00:00:01 |
|   1 |  CONCATENATION         |      |       |       |            |          |
|*  2 |   FILTER               |      |       |       |            |          |
|   3 |    MERGE JOIN CARTESIAN|      |  1000 | 41000 |     8   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T1   |     1 |    26 |     4   (0)| 00:00:01 |
|*  5 |      TABLE ACCESS FULL | T3   |     1 |    12 |     3   (0)| 00:00:01 |
|   6 |     BUFFER SORT        |      |  1000 | 15000 |     4   (0)| 00:00:01 |
|   7 |      TABLE ACCESS FULL | T2   |  1000 | 15000 |     4   (0)| 00:00:01 |
|   8 |    SORT AGGREGATE      |      |     1 |     7 |            |          |
|*  9 |     FILTER             |      |       |       |            |          |
|* 10 |      TABLE ACCESS FULL | T1   |   100 |   700 |     4   (0)| 00:00:01 |
|  11 |      SORT AGGREGATE    |      |     1 |     8 |            |          |
|* 12 |       TABLE ACCESS FULL| T1   |     1 |     8 |     4   (0)| 00:00:01 |
|* 13 |   FILTER               |      |       |       |            |          |
|* 14 |    HASH JOIN           |      |    10 |   410 |     8   (0)| 00:00:01 |
|* 15 |     TABLE ACCESS FULL  | T1   |    10 |   260 |     4   (0)| 00:00:01 |
|  16 |     TABLE ACCESS FULL  | T2   |  1000 | 15000 |     4   (0)| 00:00:01 |
|* 17 |    TABLE ACCESS FULL   | T3   |     1 |    12 |     3   (0)| 00:00:01 |
|  18 |    SORT AGGREGATE      |      |     1 |     7 |            |          |
|* 19 |     FILTER             |      |       |       |            |          |
|* 20 |      TABLE ACCESS FULL | T1   |   100 |   700 |     4   (0)| 00:00:01 |
|  21 |      SORT AGGREGATE    |      |     1 |     8 |            |          |
|* 22 |       TABLE ACCESS FULL| T1   |     1 |     8 |     4   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."N4"= (SELECT MAX("N4") FROM "T1" "E" WHERE
              "E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE "F"."N3"=:B1) AND
              "E"."N4"=:B2))
   4 - filter("T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3" WHERE
              "D1"=TRUNC(SYSDATE@!+1)))
   5 - filter("D1"=TRUNC(SYSDATE@!+1))
   9 - filter("E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE
              "F"."N3"=:B1))
  10 - filter("E"."N4"=:B1)
  12 - filter("F"."N3"=:B1)
  13 - filter(LNNVL("T1"."N3"= (SELECT "T3"."R" FROM "T3" "T3" WHERE
              "D1"=TRUNC(SYSDATE@!+1))) OR LNNVL("T1"."N4"= (SELECT MAX("N4") FROM
              "T1" "E" WHERE "E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE
              "F"."N3"=:B1) AND "E"."N4"=:B2)))
  14 - access("T1"."ID"="T2"."ID")
  15 - filter(TRIM("T1"."VC")='x')
  17 - filter("D1"=TRUNC(SYSDATE@!+1))
  19 - filter("E"."N2"= (SELECT MAX("F"."N2") FROM "T1" "F" WHERE
              "F"."N3"=:B1))
  20 - filter("E"."N4"=:B1)
  22 - filter("F"."N3"=:B1)

3. Conclusion

It looks like 19c has introduced a new heuristic OR-expansion limitation:

ORE: bypassed - query block has subquery correlated to non-parent

So, if you happen to encounter it, you now know a couple of details about how to reproduce it, why it may have been implemented, and how to work around it.

August 19, 2020

A 2 min tuning case

Filed under: Oracle — hourim @ 1:20 pm

Here’s a practical performance troubleshooting case which took me a couple of minutes to fix but about one hour to write down 😊. It came via an end-user complaint about a load process (insert/select), triggered by Oracle Warehouse Builder (OWB), which had been running for the last 3 days. When the on-call DBA handed this request to me, he said that he had already run the SQL Tuning Advisor and was still unable to find the root cause of this performance degradation. Before I started the investigation I already had an idea of what was happening. Indeed, almost always, when a query runs for a couple of days without erroring out, there is a big chance that a NESTED LOOPS operation has something to do with the delay, and that the CBO has messed up its cardinality estimation and opted for the wrong join method. That is why, based upon the DBA's explanation, I already had in mind that this ETL load process was suffering from a lack of table/index/column statistics.

Anyway, below are exactly the steps I followed to fix this issue.

-- Get the SQL_ID of the running process
SQL> select sql_id, sql_exec_id from gv$sql_monitor where status = 'EXECUTING';

Fortunately, there was only a single SQL_ID in the EXECUTING phase.

-- Get the corresponding SQL Monitoring report
SQL> SELECT DBMS_SQLTUNE.report_sql_monitor(
            sql_id       => 'bsgj02zjrfdbj',
            type         => 'TEXT',
            report_level => 'ALL') AS report
     FROM dual;

Global Information
------------------------------
 Status                                 :  EXECUTING           
 Instance ID                            :  1                   
 Session                                :  XYZDATA (245:14813) 
 SQL ID                                 :  bsgj02zjrfdbj       
 SQL Execution ID                       :  16777227            
 Execution Started                      :  10/31/2019 14:26:21 
 First Refresh Time                     :  10/31/2019 14:26:33 
 Last Refresh Time                      :  11/04/2019 10:56:29 
 Duration                               :  333009s             
 Module/Action                          :  CCC_CCS/EXECUTION   
 Service                                :  MYAPL_1             
 Program                                :  JDBC Thin Client    
 PLSQL Entry Ids (Object/Subprogram)    :  10328167,7          
 PLSQL Current Ids (Object/Subprogram)  :  10328167,7          

Global Stats
======================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Gets  | Reqs | Bytes |
======================================================================
|  553389 |  329894 |   223477 |          18 |     4G |   4G | 104TB |
======================================================================

SQL Plan Monitoring Details (Plan Hash Value=2281702856)

================================================================================================
| Id   |               Operation                |      Name       |  Rows   | Execs |   Rows   |
|      |                                        |                 | (Estim) |       | (Actual) |
================================================================================================
|    0 | INSERT STATEMENT                       |                 |         |     1 |          |
|    1 |   LOAD TABLE CONVENTIONAL              |                 |         |     1 |        0 |
|      |                                        |                 |         |       |          |
|    2 |    NESTED LOOPS OUTER                  |                 |      2M |     1 |    19932 |
|    3 |     PARTITION LIST OR                  |                 |      2M |     1 |    19932 |
| -> 4 |      TABLE ACCESS FULL                 | TT_XYZ_BAE      |      2M |     1 |    19932 |
|    5 |     PARTITION RANGE ITERATOR           |                 |       1 | 19932 |    19932 |
|    6 |      TABLE ACCESS BY LOCAL INDEX ROWID | T1_TAB          |       1 | 19932 |    19932 |
|      |                                        |                 |         |       |          |
|      |                                        |                 |         |       |          |
| -> 7 |       BITMAP CONVERSION TO ROWIDS      |                 |         | 19932 |       4G |
| -> 8 |        BITMAP INDEX SINGLE VALUE       | XY99_T1_TAB_BRI |         | 19932 |       1M |
|      |                                        |                 |         |       |          |
================================================================================================

As I expected, there is a NESTED LOOPS operation in the mix (operation id n°2). In addition, the cardinality estimate of 1 at operation line 6 is almost always an indication of a wrong estimation done by the CBO. Add to this the 4G rows generated when visiting the XY99_T1_TAB_BRI bitmap index and you end up suspecting a statistics issue with the T1_TAB table.

So, the next immediate step was to check the statistics of the T1_TAB table and the XY99_T1_TAB_BRI bitmap index:
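The @GetPartTab and @GetPartInd scripts used below are personal helpers whose source is not shown; a minimal equivalent (my reconstruction, not the author's exact scripts) simply queries the dictionary partition views with substitution variables:

```sql
-- Hypothetical reconstruction of the GetPartTab helper
select table_name, partition_name pname, partition_position ppos
     , num_rows, tablespace_name, last_analyzed last_anal
from   dba_tab_partitions
where  table_owner = upper('&owner')
and    table_name  = upper('&table_name')
order  by partition_position;

-- Hypothetical reconstruction of the GetPartInd helper
select index_name, partition_name pname, partition_position ppos
     , num_rows, tablespace_name, last_analyzed last_anal
from   dba_ind_partitions
where  index_owner = upper('&owner')
and    index_name  = upper('&index_name')
order  by partition_position;
```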

SQL> @GetPartTab
Enter value for owner: XYZDATA
Enter value for table_name: T1_TAB

TABLE_NAME   PNAME                PPOS   NUM_ROWS TABLESPACE_NAME  LAST_ANAL
------------ ------------------ ------ ---------- ---------------- -------------------
T1_TAB       P_T1_TAB_200001         1          0 XY99_TBS_DONN    26/10/2019 15:26:55
T1_TAB       P_T1_TAB_201708         2    3934618 XY99_TBS_DONN    26/10/2019 15:21:50
T1_TAB       P_T1_TAB_201709         3    3948128 XY99_TBS_DONN    26/10/2019 14:35:11
T1_TAB       P_T1_TAB_201710         4    4078620 XY99_TBS_DONN    26/10/2019 13:47:20
T1_TAB       P_T1_TAB_201711         5    4093099 XY99_TBS_DONN    26/10/2019 12:52:13
T1_TAB       P_T1_TAB_201712         6    4059354 XY99_TBS_DONN    26/10/2019 11:53:52
T1_TAB       P_T1_TAB_201801         7    4080096 XY99_TBS_DONN    26/10/2019 10:58:33
T1_TAB       P_T1_TAB_201802         8    4096904 XY99_TBS_DONN    26/10/2019 09:58:16
T1_TAB       P_T1_TAB_201803         9    4115548 XY99_TBS_DONN    26/10/2019 08:59:04
T1_TAB       P_T1_TAB_201804        10    4175894 XY99_TBS_DONN    26/10/2019 07:54:15
T1_TAB       P_T1_TAB_201805        11    4187923 XY99_TBS_DONN    26/10/2019 07:05:42
T1_TAB       P_T1_TAB_201806        12    4224300 XY99_TBS_DONN    26/10/2019 06:21:28
T1_TAB       P_T1_TAB_201807        13    4233116 XY99_TBS_DONN    26/10/2019 05:30:46
T1_TAB       P_T1_TAB_201808        14    4242821 XY99_TBS_DONN    26/10/2019 04:42:50
T1_TAB       P_T1_TAB_201809        15    4291935 XY99_TBS_DONN    26/10/2019 03:49:06
T1_TAB       P_T1_TAB_201810        16    2596486 XY99_TBS_DONN    26/10/2019 02:57:36
T1_TAB       P_T1_TAB_201811        17    4305042 XY99_TBS_DONN    26/10/2019 02:08:20
T1_TAB       P_T1_TAB_201812        18    4314987 XY99_TBS_DONN    26/10/2019 01:12:00
T1_TAB       P_T1_TAB_201901        19    4333586 XY99_TBS_DONN    26/10/2019 00:21:52
T1_TAB       P_T1_TAB_201902        20    4428940 XY99_TBS_DONN    25/10/2019 23:22:28
T1_TAB       P_T1_TAB_201903        21    4493339 XY99_TBS_DONN    25/10/2019 22:17:44
T1_TAB       P_T1_TAB_201904        22    4420400 XY99_TBS_DONN    25/10/2019 20:54:34
T1_TAB       P_T1_TAB_201905        23    4408412 XY99_TBS_DONN    25/10/2019 19:43:10
T1_TAB       P_T1_TAB_201906        24    4277786 XY99_TBS_DONN    25/10/2019 18:49:03
T1_TAB       P_T1_TAB_201907        25    4250320 XY99_TBS_DONN    25/10/2019 17:39:26
T1_TAB       P_T1_TAB_201908        26          0 XY99_TBS_DONN    25/10/2019 16:53:29  --> looks suspicious
                                       ----------
Total num_rows                         99591654

SQL> @GetPartInd
Enter value for owner: XYZDATA
Enter value for index_name: XY99_T1_TAB_BRI

INDEX_NAME        PNAME              PPOS   NUM_ROWS TABLESPACE_NAME LAST_ANAL
----------------- ---------------- ------ ---------- --------------- -------------------
XY99_T1_TAB_BRI   P_T1_TAB_200001       1          0 XY99_TBS_INDX   26/10/2019 15:26:56
XY99_T1_TAB_BRI   P_T1_TAB_201708       2         67 XY99_TBS_INDX   26/10/2019 15:22:13
XY99_T1_TAB_BRI   P_T1_TAB_201709       3         64 XY99_TBS_INDX   26/10/2019 14:35:34
XY99_T1_TAB_BRI   P_T1_TAB_201710       4         66 XY99_TBS_INDX   26/10/2019 13:47:40
XY99_T1_TAB_BRI   P_T1_TAB_201711       5         67 XY99_TBS_INDX   26/10/2019 12:52:32
XY99_T1_TAB_BRI   P_T1_TAB_201712       6         66 XY99_TBS_INDX   26/10/2019 11:54:15
XY99_T1_TAB_BRI   P_T1_TAB_201801       7         66 XY99_TBS_INDX   26/10/2019 10:58:57
XY99_T1_TAB_BRI   P_T1_TAB_201802       8         67 XY99_TBS_INDX   26/10/2019 09:58:44
XY99_T1_TAB_BRI   P_T1_TAB_201803       9         67 XY99_TBS_INDX   26/10/2019 08:59:31
XY99_T1_TAB_BRI   P_T1_TAB_201804      10         68 XY99_TBS_INDX   26/10/2019 07:54:36
XY99_T1_TAB_BRI   P_T1_TAB_201805      11         68 XY99_TBS_INDX   26/10/2019 07:06:05
XY99_T1_TAB_BRI   P_T1_TAB_201806      12         69 XY99_TBS_INDX   26/10/2019 06:21:52
XY99_T1_TAB_BRI   P_T1_TAB_201807      13         69 XY99_TBS_INDX   26/10/2019 05:31:09
XY99_T1_TAB_BRI   P_T1_TAB_201808      14         69 XY99_TBS_INDX   26/10/2019 04:43:13
XY99_T1_TAB_BRI   P_T1_TAB_201809      15         72 XY99_TBS_INDX   26/10/2019 03:49:29
XY99_T1_TAB_BRI   P_T1_TAB_201810      16        200 XY99_TBS_INDX   26/10/2019 02:57:51
XY99_T1_TAB_BRI   P_T1_TAB_201811      17         70 XY99_TBS_INDX   26/10/2019 02:08:46
XY99_T1_TAB_BRI   P_T1_TAB_201812      18         70 XY99_TBS_INDX   26/10/2019 01:12:25
XY99_T1_TAB_BRI   P_T1_TAB_201901      19         70 XY99_TBS_INDX   26/10/2019 00:22:16
XY99_T1_TAB_BRI   P_T1_TAB_201902      20         72 XY99_TBS_INDX   25/10/2019 23:22:55
XY99_T1_TAB_BRI   P_T1_TAB_201903      21         73 XY99_TBS_INDX   25/10/2019 22:18:13
XY99_T1_TAB_BRI   P_T1_TAB_201904      22         72 XY99_TBS_INDX   25/10/2019 20:55:06
XY99_T1_TAB_BRI   P_T1_TAB_201905      23         72 XY99_TBS_INDX   25/10/2019 19:43:34
XY99_T1_TAB_BRI   P_T1_TAB_201906      24         70 XY99_TBS_INDX   25/10/2019 18:49:27
XY99_T1_TAB_BRI   P_T1_TAB_201907      25         70 XY99_TBS_INDX   25/10/2019 17:39:51
XY99_T1_TAB_BRI   P_T1_TAB_201908      26          0 XY99_TBS_INDX   25/10/2019 16:53:31 --> looks suspicious
                                          ----------
Total num_rows                                  1784

The 26th partition looks suspicious. For the sake of simplicity and a quick fix, I asked the end user whether their process deals with the last created partition, and the answer was YES.

Then I killed the session, gathered the statistics, and asked the end user to relaunch the process:
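The exact commands are not reproduced in the post; they would have resembled the following sketch (the sid,serial# pair comes from the SQL monitoring report above; the partition name and dbms_stats options are illustrative):

```sql
-- Kill the long-running session (Session: XYZDATA (245:14813) in the report)
alter system kill session '245,14813' immediate;

-- Gather statistics on the suspicious last partition,
-- cascading to its local index partitions
begin
  dbms_stats.gather_table_stats(
      ownname     => 'XYZDATA'
    , tabname     => 'T1_TAB'
    , partname    => 'P_T1_TAB_201908'
    , granularity => 'PARTITION'
    , cascade     => true
  );
end;
/
```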

SQL> @GetPartTab
Enter value for owner: XYZDATA
Enter value for table_name: T1_TAB

TABLE_NAME    PNAME               PPOS   NUM_ROWS TABLESPACE_NAME  LAST_ANAL
------------- ----------------- ------ ---------- ---------------- -------------------
T1_TAB        P_T1_TAB_200001        1          0 XY99_TBS_DONN    26/10/2019 15:26:55
T1_TAB        P_T1_TAB_201708        2    3934618 XY99_TBS_DONN    26/10/2019 15:21:50
T1_TAB        P_T1_TAB_201709        3    3948128 XY99_TBS_DONN    26/10/2019 14:35:11
T1_TAB        P_T1_TAB_201710        4    4078620 XY99_TBS_DONN    26/10/2019 13:47:20
T1_TAB        P_T1_TAB_201711        5    4093099 XY99_TBS_DONN    26/10/2019 12:52:13
T1_TAB        P_T1_TAB_201712        6    4059354 XY99_TBS_DONN    26/10/2019 11:53:52
T1_TAB        P_T1_TAB_201801        7    4080096 XY99_TBS_DONN    26/10/2019 10:58:33
T1_TAB        P_T1_TAB_201802        8    4096904 XY99_TBS_DONN    26/10/2019 09:58:16
T1_TAB        P_T1_TAB_201803        9    4115548 XY99_TBS_DONN    26/10/2019 08:59:04
T1_TAB        P_T1_TAB_201804       10    4175894 XY99_TBS_DONN    26/10/2019 07:54:15
T1_TAB        P_T1_TAB_201805       11    4187923 XY99_TBS_DONN    26/10/2019 07:05:42
T1_TAB        P_T1_TAB_201806       12    4224300 XY99_TBS_DONN    26/10/2019 06:21:28
T1_TAB        P_T1_TAB_201807       13    4233116 XY99_TBS_DONN    26/10/2019 05:30:46
T1_TAB        P_T1_TAB_201808       14    4242821 XY99_TBS_DONN    26/10/2019 04:42:50
T1_TAB        P_T1_TAB_201809       15    4291935 XY99_TBS_DONN    26/10/2019 03:49:06
T1_TAB        P_T1_TAB_201810       16    2596486 XY99_TBS_DONN    26/10/2019 02:57:36
T1_TAB        P_T1_TAB_201811       17    4305042 XY99_TBS_DONN    26/10/2019 02:08:20
T1_TAB        P_T1_TAB_201812       18    4314987 XY99_TBS_DONN    26/10/2019 01:12:00
T1_TAB        P_T1_TAB_201901       19    4333586 XY99_TBS_DONN    26/10/2019 00:21:52
T1_TAB        P_T1_TAB_201902       20    4428940 XY99_TBS_DONN    25/10/2019 23:22:28
T1_TAB        P_T1_TAB_201903       21    4493339 XY99_TBS_DONN    25/10/2019 22:17:44
T1_TAB        P_T1_TAB_201904       22    4420400 XY99_TBS_DONN    25/10/2019 20:54:34
T1_TAB        P_T1_TAB_201905       23    4408412 XY99_TBS_DONN    25/10/2019 19:43:10
T1_TAB        P_T1_TAB_201906       24    4277786 XY99_TBS_DONN    25/10/2019 18:49:03
T1_TAB        P_T1_TAB_201907       25    4250320 XY99_TBS_DONN    25/10/2019 17:39:26
T1_TAB        P_T1_TAB_201908       26    4185925 XY99_TBS_DONN    04/11/2019 18:51:48
                                       ----------                  
Total num_rows                         103777579                   


SQL> @getPartInd
Enter value for owner: XYZDATA
Enter value for index_name: XY99_T1_TAB_BRI

INDEX_NAME        PNAME               PPOS   NUM_ROWS TABLESPACE_NAME   LAST_ANAL
----------------- ----------------- ------ ---------- ----------------- -------------------
XY99_T1_TAB_BRI   P_T1_TAB_200001        1          0 XY99_TBS_INDX     26/10/2019 15:26:56
XY99_T1_TAB_BRI   P_T1_TAB_201708        2         67 XY99_TBS_INDX     26/10/2019 15:22:13
XY99_T1_TAB_BRI   P_T1_TAB_201709        3         64 XY99_TBS_INDX     26/10/2019 14:35:34
XY99_T1_TAB_BRI   P_T1_TAB_201710        4         66 XY99_TBS_INDX     26/10/2019 13:47:40
XY99_T1_TAB_BRI   P_T1_TAB_201711        5         67 XY99_TBS_INDX     26/10/2019 12:52:32
XY99_T1_TAB_BRI   P_T1_TAB_201712        6         66 XY99_TBS_INDX     26/10/2019 11:54:15
XY99_T1_TAB_BRI   P_T1_TAB_201801        7         66 XY99_TBS_INDX     26/10/2019 10:58:57
XY99_T1_TAB_BRI   P_T1_TAB_201802        8         67 XY99_TBS_INDX     26/10/2019 09:58:44
XY99_T1_TAB_BRI   P_T1_TAB_201803        9         67 XY99_TBS_INDX     26/10/2019 08:59:31
XY99_T1_TAB_BRI   P_T1_TAB_201804       10         68 XY99_TBS_INDX     26/10/2019 07:54:36
XY99_T1_TAB_BRI   P_T1_TAB_201805       11         68 XY99_TBS_INDX     26/10/2019 07:06:05
XY99_T1_TAB_BRI   P_T1_TAB_201806       12         69 XY99_TBS_INDX     26/10/2019 06:21:52
XY99_T1_TAB_BRI   P_T1_TAB_201807       13         69 XY99_TBS_INDX     26/10/2019 05:31:09
XY99_T1_TAB_BRI   P_T1_TAB_201808       14         69 XY99_TBS_INDX     26/10/2019 04:43:13
XY99_T1_TAB_BRI   P_T1_TAB_201809       15         72 XY99_TBS_INDX     26/10/2019 03:49:29
XY99_T1_TAB_BRI   P_T1_TAB_201810       16        200 XY99_TBS_INDX     26/10/2019 02:57:51
XY99_T1_TAB_BRI   P_T1_TAB_201811       17         70 XY99_TBS_INDX     26/10/2019 02:08:46
XY99_T1_TAB_BRI   P_T1_TAB_201812       18         70 XY99_TBS_INDX     26/10/2019 01:12:25
XY99_T1_TAB_BRI   P_T1_TAB_201901       19         70 XY99_TBS_INDX     26/10/2019 00:22:16
XY99_T1_TAB_BRI   P_T1_TAB_201902       20         72 XY99_TBS_INDX     25/10/2019 23:22:55
XY99_T1_TAB_BRI   P_T1_TAB_201903       21         73 XY99_TBS_INDX     25/10/2019 22:18:13
XY99_T1_TAB_BRI   P_T1_TAB_201904       22         72 XY99_TBS_INDX     25/10/2019 20:55:06
XY99_T1_TAB_BRI   P_T1_TAB_201905       23         72 XY99_TBS_INDX     25/10/2019 19:43:34
XY99_T1_TAB_BRI   P_T1_TAB_201906       24         70 XY99_TBS_INDX     25/10/2019 18:49:27
XY99_T1_TAB_BRI   P_T1_TAB_201907       25         70 XY99_TBS_INDX     25/10/2019 17:39:51
XY99_T1_TAB_BRI   P_T1_TAB_201908       26         68 XY99_TBS_INDX     04/11/2019 18:52:14
                                           ----------
Total num_rows                                   1852

When this insert/select was relaunched, it completed in less than one hour, as shown by the corresponding SQL monitoring report:

Global Information
------------------------------
 Status                                 :  EXECUTING           
 Instance ID                            :  1                   
 Session                                :  XYZDATA (245:14813) 
 SQL ID                                 :  bsgj02zjrfdbj       
 SQL Execution ID                       :  16777216           
 Execution Started                      :  11/05/2019 10:02:17
 First Refresh Time                     :  11/05/2019 10:02:24
 Last Refresh Time                      :  11/05/2019 11:00:03
 Duration                               :  3466s     -------------------> The new execution time        
 Module/Action                          :  CCC_CCS/EXECUTION   
 Service                                :  MYAPL_1             
 Program                                :  JDBC Thin Client    
 PLSQL Entry Ids (Object/Subprogram)    :  10328167,7          
 PLSQL Current Ids (Object/Subprogram)  :  10328167,7          

-- The new execution plan
Global Stats
======================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Gets  | Reqs | Bytes |
======================================================================
|  3766   |  607    |   3158   |         0.00|      5M|  219K|  142GB|
======================================================================

SQL Plan Monitoring Details (Plan Hash Value=2281702856)
================================================================================================
| Id   |               Operation                |      Name       |  Rows   | Execs |   Rows   |
|      |                                        |                 | (Estim) |       | (Actual) |
================================================================================================
|    0 | INSERT STATEMENT                       |                 |         |     1 |          |
|    1 |   LOAD TABLE CONVENTIONAL              |                 |         |     1 |          |
|    2 |    HASH JOIN OUTER                     |                 |      2M |     1 |        0 |
|    3 |     PARTITION LIST OR                  |                 |      2M |     1 |     971K |
|    4 |      TABLE ACCESS FULL                 | TT_XYZ_BAE      |      2M |     2 |     971K |
|    5 |     PARTITION RANGE ITERATOR           |                 |     104M|     1 |     100M |
|    6 |      TABLE ACCESS FULL                 | T1_TAB          |     104M|    26 |     100M |
================================================================================================

Bottom Line

This simple note shows how diagnosing and fixing a real-life performance issue can sometimes be accomplished in a couple of minutes (in this case, the fix took no longer than the call to the dbms_stats package). It will not always be as obvious as this case suggests, but I can say that I have very often spent more time answering questions, explaining, and writing about how I fixed a performance issue than it took me to diagnose and fix it.

August 9, 2020

SET operation to join

Filed under: Oracle — hourim @ 10:06 am

In this article about semi-joins in modern relational databases I’ve shown that neither Oracle nor Postgres is able to transform a SET operation like INTERSECT or MINUS into a SEMI-JOIN. Jonathan Lewis has, however, explained in a comment to that article that Oracle has in fact been able to convert INTERSECT (and MINUS) into a semi-join (and anti-join) since 10g. But the conversion is only possible after changing the default value of the _convert_set_to_join parameter from FALSE to TRUE. Starting from 19.1.0.1 the conversion is done automatically, because Oracle has finally decided to let this transformation happen by default, as outlined by Tanel Poder’s cofep script:

SQL> @cofep 12.2.0.1 19.1.0.1
Compare Optimizer_Features_Enable Parameter differences
for values 12.2.0.1 and 19.1.0.1

PARAMETER              '12.2.0.1'    '19.1.0.1'   DESCRIPTION
---------------------- ------------- ------------ --------------------------------------------
_convert_set_to_join   FALSE         TRUE         enables conversion of set operator to join

Here’s below a simple demonstration

create table t1
as 
select
   rownum n1
  ,mod(rownum,5) n2
  ,trunc((rownum - 1)/3) n3
from
  dual
connect by
  level <= 1e4;  
  
create table t2
as 
select
   rownum n1
  ,mod(rownum,3) n2
  ,trunc((rownum - 1)/5) n3
from
  dual
connect by
  level <= 5;    

exec dbms_stats.gather_table_stats(user, 't1');
exec dbms_stats.gather_table_stats(user, 't2');
SQL> show parameter optimizer_features

PARAMETER_NAME                  TYPE        VALUE
------------------------------- ----------- ---------
optimizer_features_enable       string      19.1.0

SQL> @pd _convert_set
Show all parameters and session values from x$ksppi/x$ksppcv...

       NUM N_HEX NAME                  VALUE    DESCRIPTION
---------- ----- --------------------- -------- ------------------------------------------
      3718   E86 _convert_set_to_join  FALSE    enables conversion of set operator to join

SQL> select t2.n1 from t2 intersect select t1.n1 from t1;

        N1
----------
         1
         2
         3
         4
         5

SQL> select * from table(dbms_xplan.display_cursor);

----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  INTERSECTION       |      |       |       |
|   2 |   SORT UNIQUE       |      |     5 |    15 |
|   3 |    TABLE ACCESS FULL| T2   |     5 |    15 |
|   4 |   SORT UNIQUE       |      | 10000 | 40000 |
|   5 |    TABLE ACCESS FULL| T1   | 10000 | 40000 |
----------------------------------------------------

As you can see, the INTERSECT operation has not been transformed into a semi-join because the corresponding hidden parameter is set to FALSE by default.
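On versions where the parameter is still FALSE by default, the conversion can also be forced by hand. This is only a test sketch: _convert_set_to_join is a hidden parameter, and changing underscore parameters outside of a test environment without Oracle Support's blessing is not recommended.

```sql
-- For testing only: force the set-to-join conversion while
-- keeping optimizer_features_enable at its 12.2.0.1 value
alter session set "_convert_set_to_join" = true;

select t2.n1 from t2 intersect select t1.n1 from t1;
```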

But let’s now do the same experiment under the new optimizer 19.1.0.1 and check the difference:

SQL> alter session set optimizer_features_enable='19.1.0.1';

Session altered.

SQL> select t2.n1 from t2 intersect select t1.n1 from t1;

        N1
----------
         1
         2
         4
         5
         3

SQL> @xpsimp

SQL_ID  0atbzgvynmj51, child number 1
-------------------------------------
select t2.n1 from t2 intersect select t1.n1 from t1

Plan hash value: 3703458891
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |
|   1 |  HASH UNIQUE        |      |     5 |    35 |
|*  2 |   HASH JOIN SEMI    |      |     5 |    35 |
|   3 |    TABLE ACCESS FULL| T2   |     5 |    15 |
|   4 |    TABLE ACCESS FULL| T1   | 10000 | 40000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access(SYS_OP_MAP_NONNULL("T2"."N1")=SYS_OP_MAP_NONNULL("T1"."N1"))

Notice how the INTERSECT set operation has now been automatically transformed into a HASH JOIN SEMI operation. This becomes possible by default because, starting from optimizer_features_enable 19.1.0.1, the default value of the hidden parameter _convert_set_to_join is TRUE.
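Logically, the transformation rewrites the INTERSECT as a DISTINCT over a semi-join. A hand-written approximation of what the optimizer now does automatically (a sketch that deliberately ignores the NULL-matching semantics that the SYS_OP_MAP_NONNULL predicate takes care of) would be:

```sql
-- Roughly what the CBO derives from the INTERSECT: keep the
-- distinct t2 values that have a match in t1 (semi-join)
select distinct t2.n1
from   t2
where  t2.n1 in (select t1.n1 from t1);
```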

While I was trying to print out the reason why the execution plan was not shared, I spotted something interesting:

SQL> @nonshared 0atbzgvynmj51
Show why existing SQL child cursors were not reused (V$SQL_SHARED_CURSOR)...

SQL_ID                        : 0atbzgvynmj51
ADDRESS                       : 00007FFB65E682E0
CHILD_ADDRESS                 : 00007FFB65E65D80
CHILD_NUMBER                  : 0
HASH_MATCH_FAILED             : Y
REASON                        : <ChildNode><ChildNumber>0</ChildNumber><ID>3</ID>
                                <reason>Optimizer mismatch(12)</reason><size>2x476</size>
								<optimizer_features_enable> 19.1.0  19.1.0.1  </optimizer_features_enable>
								<_optimizer_undo_cost_change> 19.1.0  19.1.0.1 </_optimizer_undo_cost_change>
								<_convert_set_to_join> false true              </_convert_set_to_join>
							    <_subquery_pruning_mv_enabled> false  true     </_subquery_pruning_mv_enabled>
							    <_optimizer_push_down_distinct> 0   5   </_optimizer_push_down_distinct>
							    <_fix_control_key> 0  260074645         </_fix_control_key>
							    <_px_shared_hash_join> false  true      </_px_shared_hash_join>
							    </ChildNode>
CON_ID                        : 1
-----------------
SQL_ID                        : 0atbzgvynmj51
ADDRESS                       : 00007FFB65E682E0
CHILD_ADDRESS                 : 00007FFB65C23F38
CHILD_NUMBER                  : 1
OPTIMIZER_MISMATCH            : Y
REASON                        :
CON_ID                        : 1
-----------------

PL/SQL procedure successfully completed.

It looks like Oracle now shows all the optimizer parameters that changed between the optimizer used to compile child cursor 0 and the one used to compile child cursor 1. This is confirmed using Tanel Poder’s cofep script:

SQL> @cofep 19.1.0 19.1.0.1
Compare Optimizer_Features_Enable Parameter differences
for values 19.1.0 and 19.1.0.1

PARAMETER                       '19.1.0'   '19.1.0.1' DESCRIPTION
------------------------------- ---------- ---------- ----------------------------------------------------------------------
_optimizer_push_down_distinct   0          5          push down distinct from query block to table
optimizer_features_enable       19.1.0     19.1.0.1   optimizer plan compatibility parameter
_optimizer_undo_cost_change     19.1.0     19.1.0.1   optimizer undo cost change
_subquery_pruning_mv_enabled    FALSE      TRUE       enable the use of subquery predicates with MVs to perform pruning
_convert_set_to_join            FALSE      TRUE       enables conversion of set operator to join
_px_shared_hash_join            FALSE      TRUE       enable/disable shared hash join

6 rows selected.

August 6, 2020

Historical column histogram

Filed under: Oracle — hourim @ 4:03 pm

Tracking column histogram modifications is not a common task among Oracle DBAs and developers. And why would one want to know past column histogram values? In this blog post I intend to provide a script serving that purpose and show an example where it can be of great help.

1. Historical column histogram values

Here is the script, which I have named h_hist1 (you will see later why I chose this name):

/* ----------------------------------------------------------------------------------|
|Author : Mohamed Houri                                                              |
|Date   : 03/07/2020                                                                 |
|Scope  : This script gives historical column histogram values                       |
|          -- I am using sys.WRI$_OPTSTAT_HISTHEAD_HISTORY for this purpose			 |		
|          -- I am only able to say whether, previously, there was HISTOGRAM or not  |
|          -- I can't show the historical type of Histogram 				         |
|																					 |
|Usage  :  SQL> @h_hist1                                                             |
|			Enter value for table_name: t1                                           |
|			Enter value for owner: test                                              |
|			Enter value for col_name: n2     				                         |
-------------------------------------------------------------------------------------|*/
col object_name   	    format a20
col column_name  		format a12
col last_analyzed 		format a20
col prev_last_analyzed  format a20
col histogram           format a16
col prev_histogram      format a16
WITH sq AS 
    (
     SELECT
	      object_id
		 ,object_name
		 ,subobject_name
	 FROM
	     dba_objects
	 WHERE
	     object_name    = upper ('&&table_name')
	 AND owner          = upper('&&owner')
	 AND subobject_name IS NULL
	 )
SELECT
	 object_name
	,column_name
	,lead(prev_histogram,1,histogram) over (order by last_analyzed) histogram
	,last_analyzed
	,prev_histogram
	,prev_last_analyzed
FROM
   (
     SELECT
	     object_name
		,column_name
		,(select histogram from all_tab_col_statistics where owner = upper('&&owner') 
		  and table_name = upper('&&table_name') and column_name = upper('&&col_name')) histogram
		,last_analyzed
		,stat_time prev_last_analyzed
		,row_number() over (order by last_analyzed) rn
		,case when round(derivedDensity,9)= round(density,9) then 'NONE' else 'HISTOGRAM' end prev_histogram
	 FROM
	    (
		 SELECT
		     object_name
			,column_name
			,to_char(savtime ,'dd/mm/yyyy hh24:mi:ss')     last_analyzed
			,to_char(timestamp# ,'dd/mm/yyyy hh24:mi:ss') stat_time
			,density
			,1/distcnt derivedDensity
			,row_number() over (order by savtime) rn
			,lag(case when round(1/distcnt,9) = round(density,9) then 'NONE' else 'HISTOGRAM' end) over(order by savtime) hist_histogram
		 FROM
		    sys.WRI$_OPTSTAT_HISTHEAD_HISTORY
			INNER JOIN sq ON object_id = obj#
			INNER JOIN (SELECT 
			                column_id
						   ,column_name
						FROM
						    dba_tab_columns
						WHERE
						    column_name = upper('&&col_name')
						AND table_name  = upper('&&table_name') 
			            AND owner       = upper('&&owner')
						) ON intcol# = column_id
	)
WHERE
   rn >= 1 -- exclude/include the very first dbms_stats run
   )
ORDER BY
    last_analyzed;

And here is a simple usage illustration:

SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size auto');
SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size 1');
SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size 3');

SQL> @h_hist1
Enter value for table_name: t1
Enter value for owner: c##mhouri
Enter value for col_name: n2

OBJECT_NAME  COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
------------ ------------ ---------------- -------------------- ---------------- --------------------
T1           N2           HISTOGRAM        06/08/2020 15:39:00  HISTOGRAM        25/06/2020 18:42:06
T1           N2           NONE             06/08/2020 15:43:58  HISTOGRAM        06/08/2020 15:38:59
T1           N2           TOP-FREQUENCY    06/08/2020 16:03:19  NONE             06/08/2020 15:43:58

As you can see, we went from HISTOGRAM on August 6th at 15:39 to NONE at 15:43, then to TOP-FREQUENCY at 16:03. If I gather statistics again on the same table, here is what the h_hist1 script shows:

SQL> exec dbms_stats.gather_table_stats(user, 't1', method_opt => 'for all columns size auto');
SQL> @h_hist1

OBJECT_NAME COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
----------- ------------ ---------------- -------------------- ---------------- --------------------
T1          N2           HISTOGRAM        06/08/2020 15:39:00  HISTOGRAM        25/06/2020 18:42:06
T1          N2           NONE             06/08/2020 15:43:58  HISTOGRAM        06/08/2020 15:38:59
T1          N2           HISTOGRAM        06/08/2020 16:03:19  NONE             06/08/2020 15:43:58
T1          N2           FREQUENCY        06/08/2020 16:23:00  HISTOGRAM        06/08/2020 16:03:18

While I lose the histogram type (TOP-FREQUENCY) as I go back through the historical path of the n2 column, I am nevertheless able to say whether the column previously had a histogram or not.
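The heuristic the script relies on deserves a word: when a column has no histogram, Oracle stores its density as 1/num_distinct, so a stored density that differs from 1/num_distinct betrays the presence of a histogram. The same idea can be checked against the current statistics (a sketch against the standard all_tab_col_statistics view):

```sql
-- Infer histogram presence from the density, as h_hist1 does
-- against the historical sys.WRI$_OPTSTAT_HISTHEAD_HISTORY data
select
    column_name
   ,histogram
   ,density
   ,1/num_distinct derived_density
   ,case when round(1/num_distinct, 9) = round(density, 9)
         then 'NONE' else 'HISTOGRAM'
    end inferred_histogram
from  all_tab_col_statistics
where owner       = upper('&owner')
and   table_name  = upper('&table_name')
and   column_name = upper('&col_name');
```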

When you Google the words “historical histogram Oracle”, the first hit is a blog post by Nigel Bayliss: How do I get histogram history? Nigel’s script is named h_hist. Here is its output when applied to the current t1/n2 case:

SQL> @h_hist t1 c##mhouri
Table : C##MHOURI.T1

Column: N2                                         Last analyzed: 2020-06-08 16:22:59 [Current: FREQUENCY]
-     2020-06-08 15:39:00     0 ->     5 buckets CHANGE
-     2020-06-08 15:43:58     5 ->     0 buckets CHANGE
-     2020-06-08 16:03:19     0 ->     3 buckets CHANGE
-     2020-06-08 16:22:59     3 ->     5 buckets CHANGE

PL/SQL procedure successfully completed.

Its output coincides perfectly with my script’s but, for practical reasons, I prefer using mine 🙂

2. When might you need this script?

Here is an example illustrating the utility of the h_hist1 script:

SQL> select count(1) from t where n2= 100;

  COUNT(1)
----------
       100

SQL_ID  2k5g2apy78hj8, child number 1
-------------------------------------
Plan hash value: 1882749816
------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes |
------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |
|   1 |  SORT AGGREGATE   |          |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T_ACS_I1 |   100 |   300 |
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"=:SYS_B_1)

SQL> select count(1) from t where n2= 1e6;

  COUNT(1)
----------
   1099049

SQL_ID  2k5g2apy78hj8, child number 2
-------------------------------------
Plan hash value: 2966233522
---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |
|*  2 |   TABLE ACCESS FULL| T    |  1099K|  3219K|
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N2"=:SYS_B_1)

We have a query switching between an INDEX RANGE SCAN and a FULL TABLE SCAN depending on the number of rows returned for each bind variable value. So far so good, until you are asked to troubleshoot a performance issue caused by this query because it is stuck with the INDEX RANGE SCAN execution plan whatever the bind variable value is:

SQL> select count(1) from t where n2= 1e6;

  COUNT(1)
----------
   1099049

------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes |
------------------------------------------------------
|   0 | SELECT STATEMENT  |          |       |       |
|   1 |  SORT AGGREGATE   |          |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T_ACS_I1 |   240K|   703K|
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"=:SYS_B_1)

This performance pain occurred because someone, somewhere, deleted the histogram of column n2 of table t, as the following proves:

SQL> @h_hist1

OBJECT_NAME COLUMN_NAME  HISTOGRAM        LAST_ANALYZED        PREV_HISTOGRAM   PREV_LAST_ANALYZED
----------- ------------ ---------------- -------------------- ---------------- --------------------
T           N2           HISTOGRAM        06/08/2020 17:10:44  HISTOGRAM
T           N2           NONE             06/08/2020 17:32:45  HISTOGRAM        06/08/2020 17:10:43 

Thanks to the FREQUENCY histogram on column n2, the previous cursor was bind sensitive and bind aware, allowing the query to get the best possible execution plan for each bind variable value. But as soon as the n2 column lost its histogram and the cursor was flushed from memory, the new cursor was neither bind sensitive nor bind aware, which explains why the corresponding query was stuck with the INDEX RANGE SCAN execution plan.

select
   dbms_stats.report_stats_operations(since => sysdate -1) text_line
from dual;

----------------------------------------------------------------------------------------
| Operation Id | Operation              | Target              | Start Time             |
----------------------------------------------------------------------------------------
| 5219         | gather_table_stats     | "C##MHOURI"."T"     | 06/08/20               |
|              |                        |                     | 17:32:45,333000 +02:00 |
----------------------------------------------------------------------------------------

select
   dbms_stats.report_single_stats_operation
      (opid         => 5219
      ,detail_level => 'ALL')
from dual;

---------------------------------------------------------------------------------------------
| Operation | Operation          | Target          | Session | Additional Info              |
| Id        |                    |                 | Id      |                              |
---------------------------------------------------------------------------------------------
| 5219      | gather_table_stats | "C##MHOURI"."T" | 395     | Parameters: [block_sample:   |
|           |                    |                 |         | FALSE] [cascade: NULL]       |
|           |                    |                 |         | [concurrent: FALSE] [degree: |
|           |                    |                 |         | NULL] [estimate_percent:     |
|           |                    |                 |         | DBMS_STATS.AUTO_SAMPLE_SIZE] |
|           |                    |                 |         | [force: FALSE] [granularity: |
|           |                    |                 |         | AUTO] [method_opt: for all   |
|           |                    |                 |         | columns size 1]              |
|           |                    |                 |         | [no_invalidate: NULL]        |
|           |                    |                 |         | [ownname: C##MHOURI]         |
|           |                    |                 |         | [partname: ]                 |
|           |                    |                 |         | [reporting_mode: FALSE]      |
|           |                    |                 |         | [statid: ] [statown: ]       |
|           |                    |                 |         | [stattab: ] [stattype: DATA] |
|           |                    |                 |         | [tabname: t]                 |
---------------------------------------------------------------------------------------------

Notice how the last dbms_stats run (operation id 5219) did not gather histograms because it used:

[method_opt: for all columns size 1] 
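Once the cause is identified, one way out is to re-gather statistics while explicitly requesting a histogram on n2. This is only a sketch: 254 is just the classical maximum bucket count, and dbms_stats.restore_table_stats could alternatively restore the statistics as of a timestamp preceding the damaging gather.

```sql
-- Re-create the histogram on t.n2 and invalidate dependent cursors
exec dbms_stats.gather_table_stats(user, 't', method_opt => 'for columns n2 size 254', no_invalidate => false);
```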

3. Conclusion

Tracking column histogram modifications can be very handy in certain circumstances. The h_hist1 script provided in this blog post can help in such situations.

August 5, 2020

On the impact of stale statistics when enabling Foreign keys

Filed under: Oracle,Statistics — hourim @ 12:55 pm

If you are going to enable a foreign key integrity constraint, make sure you have fresh and representative statistics on both the child and parent tables. If not, you might lose a lot of time enabling your constraint because of a hard-to-spot recursive query.
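The safe sequence therefore looks like the following sketch (the constraint name is hypothetical; the table names are the obfuscated ones from the case below):

```sql
-- Refresh statistics on both ends first ...
exec dbms_stats.gather_table_stats(user, 'table_abc_c');      -- child
exec dbms_stats.gather_table_stats(user, 'table_parent_abc'); -- parent

-- ... then enable the constraint; ENABLE defaults to VALIDATE,
-- which runs a recursive child-to-parent anti-join check
alter table table_abc_c enable constraint table_abc_c_fk;
```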

Here is a summary of a real-life 11.2.0.4 case:

SQL Text
------------------------------
BEGIN XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL); END;

Global Information
------------------------------
 Status              :  EXECUTING           
 Instance ID         :  1                   
 Session             :  XYZ_ABC(13:25335)  
 SQL ID              :  46bn1bvfkkpvn       
 SQL Execution ID    :  16777216            
 Execution Started   :  01/13/2020 16:54:37 
 First Refresh Time  :  01/13/2020 16:54:43 
 Last Refresh Time   :  01/14/2020 15:23:35 
 Duration            :  80939s              
 Module/Action       :  SQL Developer/-     
 Service             :  SYS$USERS           
 Program             :  SQL Developer       

Global Stats
===========================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | PL/SQL  |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Time(s) | Waits(s) |  Gets  | Reqs | Bytes |
===========================================================================================
|   80937 |   79809 |     3.27 |        1.74 |    0.00 |     1124 |     2G |  215 |  21MB |
===========================================================================================

The very long FK-enabling process was still in EXECUTING status after more than 22 hours, as shown above. Let’s check what ASH can tell us about this situation:

SQL> select sql_id, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  group by sql_id
  7  order by 2 desc
  8  ;

SQL_ID          COUNT(1)
------------- ----------
                   17801
608srf0vf5f3q      12937 -->  alter table ABCDXY. TABLE_ABC_C
a8gtvr24afy70      10919
1vtd595xys9fp       9135
7q4kq2auz89x1       4203
fdxa2ph5250x1       3058

Manifestly, we must focus our attention on SQL_ID 608srf0vf5f3q.

From a wait event point of view, ASH shows this:

SQL> select event, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  group by event
  7  order by 2 desc;

EVENT                      COUNT(1)
------------------------ ----------
                              48756
row cache lock                12194
db file parallel write         6867
db file scattered read         6519
db file sequential read        5356
direct path read               1794
…/../

A lot of CPU burning, followed by the row cache lock wait event. However, SQL_ID 608srf0vf5f3q is reported to be only burning CPU, as the following proves:

SQL> select event, count(1)
  2  from gv$active_session_history
  3  where
  4      sample_time between to_date('14012020 12:00:00', 'ddmmyyyy hh24:mi:ss')
  5                  and     to_date('14012020 15:45:00', 'ddmmyyyy hh24:mi:ss')
  6  and sql_id = '608srf0vf5f3q'
  7  group by event
  8  order by 2 desc;

EVENT     COUNT(1)
------- ----------
             12937
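
Before snapping the session, the SID of the still-running PL/SQL call can be located via gv$sql_monitor (a sketch; the SQL_ID is the one of the PL/SQL call shown in the SQL monitoring report above):

```sql
-- Find the session driving the still-executing PL/SQL call
select inst_id, sid, session_serial#, status, sql_id
from   gv$sql_monitor
where  sql_id = '46bn1bvfkkpvn'
and    status = 'EXECUTING';
```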

Since the enabling process was still running, I decided to “snap” the SID (13) of the PL/SQL stored procedure call, which I got from the corresponding gv$sql_monitor entry:

SQL> @snapper all 5 1 13
Sampling SID 13 with interval 5 seconds, taking 1 snapshots...

-- Session Snapper v4.26 - by Tanel Poder ( http://blog.tanelpoder.com/snapper ) 
- Enjoy the Most Advanced Oracle Troubleshooting Script on the Planet! 

------------------------------------------------------------------------------------------------
SID, USERNAME, TYPE, STATISTIC                            ,         DELTA, HDELTA/SEC,    %TIME,
------------------------------------------------------------------------------------------------
 13, XYZ_ABC, STAT, session logical reads                ,        142350,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets                      ,        142351,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets from cache           ,        142352,     27.03k,         ,
 13, XYZ_ABC, STAT, consistent gets from cache (fastpath),        142352,     27.03k,         ,
 13, XYZ_ABC, STAT, logical read bytes from cache        ,    1166147584,    221.41M,         , --> this looks interesting
 13, XYZ_ABC, STAT, calls to kcmgcs                      ,            22,       4.18,         ,
 13, XYZ_ABC, STAT, no work - consistent read gets       ,        142334,     27.02k,         ,
 13, XYZ_ABC, STAT, table scans (short tables)           ,             3,        .57,         ,
 13, XYZ_ABC, STAT, table scan rows gotten               ,      23484525,      4.46M,         ,
 13, XYZ_ABC, STAT, table scan blocks gotten             ,        142335,     27.02k,         ,
 13, XYZ_ABC, STAT, buffer is pinned count               ,             3,        .57,         ,
 13, XYZ_ABC, TIME, DB CPU                               ,       6003174,      1.14s,   114.0%,
 13, XYZ_ABC, TIME, sql execute elapsed time             ,       6003624,      1.14s,   114.0%,
 13, XYZ_ABC, TIME, DB time                              ,       6003624,      1.14s,   114.0%,

--  End of Stats snap 1, end=2020-01-14 15:30:20, seconds=5.3

----------------------------------------------------------------------------------
  ActSes   %Thread | INST | SQL_ID          | SQL_CHILD | EVENT  | WAIT_CLASS
----------------------------------------------------------------------------------
    1.00    (100%) |    1 | 608srf0vf5f3q   | 0         | ON CPU | ON CPU

--  End of ASH snap 1, end=2020-01-14 15:30:20, seconds=5, samples_taken=48, AAS=1

All the investigations show that the culprit query is 608srf0vf5f3q, which is burning a lot of CPU. The 221M/s of logical read bytes from cache correlates very well with the high CPU consumption. But we still don’t know why, or on whose behalf, those logical reads from cache are being done.

In addition, the ash_wait_chain script applied to this SQL_ID 608srf0vf5f3q shows that this query is not blocked by any other process:

SQL> @ash_wait_chains event2 sql_id='608srf0vf5f3q' "timestamp'2020-01-14 12:00:00'" "timestamp'2020-01-14 15:45:00'"

-- Display ASH Wait Chain Signatures script v0.4 BETA by Tanel Poder ( http://blog.tanelpoder.com )

%This     SECONDS     AAS WAIT_CHAIN    LAST_SEEN
------ ---------- ------- ------------- --------------------
 100%       12937     1.0 -> ON CPU     2020-01-14 15:44:59

That’s weird, isn’t it?

After a couple of minutes going round and round in circles, I decided to use brute force by:

  • Killing the process
  • Activating the 10046 trace
  • Running the process again for a couple of minutes
  • Stopping the process again
  • Analyzing the trace file

alter session set tracefile_identifier = EnablingForeignKey;
@46on 12
exec XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL);
@46off

And here is what I found in the tkprof-formatted trace file:

SQL ID: 56tj8jdcw15jb Plan Hash: 2257352843

select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
from
 "ABCDXY"."TABLE_ABC_C" A , "ABCDXY"."TABLE_PARENT_ABC" B where( 
  "A"."COL_1" is not null and "A"."COL_2" is not null and "A"."COL_3" 
  is not null and "A"."COL_4" is not null) and( "B"."COL_1" (+)= 
  "A"."COL_1" and "B"."COL_2" (+)= "A"."COL_2" and "B"."COL_3" (+)=
   "A"."COL_3" and "B"."COL_4" (+)= "A"."COL_4") and( "B"."COL_1" is 
  null or "B"."COL_2" is null or "B"."COL_3" is null or "B"."COL_4" is 
  null)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    217.83     224.03          0    6179071          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    217.83     224.03          0    6179071          0           0

This recursive query was enough for me to understand and fix the issue. I then started by checking the historical executions of this recursive query, as shown below:

SQL> @SqlHistStat
Enter value for sql_id: 56tj8jdcw15jb
Enter value for from_date: 01012020

SNAP_BEGIN                PLAN_HASH_VALUE    EXECS END_OF_FETCH  AVG_ETIME     AVG_LIO   AVG_ROWS
------------------------- --------------- -------- ------------ ----------  ---------- ----------
13-JAN-20 04.00.44.563 PM      2257352843        1            0        381    10934401          0
13-JAN-20 05.00.56.481 PM      2257352843        0            0       3551   104544327          0
13-JAN-20 06.00.05.981 PM      2257352843        0            0       3609   104844651          0
13-JAN-20 07.00.15.180 PM      2257352843        0            0       3619    84153912          0
13-JAN-20 08.00.29.468 PM      2257352843        0            0       3610   104607796          0
13-JAN-20 09.00.38.801 PM      2257352843        0            0       3619    93027006          0
13-JAN-20 10.00.49.488 PM      2257352843        0            0       3595   102089647          0
13-JAN-20 11.00.59.446 PM      2257352843        0            0       3545    99557582          0
14-JAN-20 12.00.09.039 AM      2257352843        0            0       3609   104246681          0
14-JAN-20 01.00.18.823 AM      2257352843        0            0       3608   106861170          0
14-JAN-20 02.00.28.120 AM      2257352843        0            0       3611   103242627          0
14-JAN-20 03.00.37.711 AM      2257352843        0            0       3610   106157348          0
14-JAN-20 04.00.47.161 AM      2257352843        0            0       3608   103770559          0
14-JAN-20 05.00.56.531 AM      2257352843        0            0       3547   103304809          0
14-JAN-20 06.00.05.487 AM      2257352843        0            0       3609   105243323          0
14-JAN-20 07.00.14.675 AM      2257352843        0            0       3608   107326829          0
14-JAN-20 08.00.23.442 AM      2257352843        0            0       3608   108137854          0
14-JAN-20 09.00.32.133 AM      2257352843        0            0       3609   107183542          0
14-JAN-20 10.00.41.111 AM      2257352843        0            0       3610   104855775          0
14-JAN-20 11.00.50.825 AM      2257352843        0            0       3554   100223228          0
14-JAN-20 12.00.00.751 PM      2257352843        0            0       3610    99540252          0
14-JAN-20 01.00.11.282 PM      2257352843        0            0       3614   101153086          0
14-JAN-20 02.00.21.974 PM      2257352843        0            0       3610    98943412          0
14-JAN-20 03.00.32.525 PM      2257352843        1            0        436    12358214          0

24 rows selected.

As you can see from the END_OF_FETCH column, this query spanned 24 one-hour snapshots without ever completing. It used the following execution plan:

--------------------------------------------------------------------------------
| Id  | Operation                  | Name              | Rows  | Pstart| Pstop |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                   |       |       |       |
|*  1 |  FILTER                    |                   |       |       |       |
|   2 |   NESTED LOOPS OUTER       |                   |     1 |       |       |
|   3 |    PARTITION RANGE ALL     |                   |     1 |     1 |     2 |
|   4 |     TABLE ACCESS FULL      | TABLE_ABC_C       |     1 |     1 |     2 |
|   5 |    PARTITION RANGE ITERATOR|                   |     1 |   KEY |   KEY |
|*  6 |     TABLE ACCESS FULL      | TABLE_PARENT_ABC  |     1 |   KEY |   KEY |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("B"."COL_1" IS NULL OR "B"."COL_2" IS NULL OR "B"."COL_3" IS NULL OR
              "B"."COL_4" IS NULL))
   6 - filter(("B"."COL_1"="A"."COL_1" AND "B"."COL_2"="A"."COL_2" AND
               "B"."COL_3"="A"."COL_3" AND "B"."COL_4"="A"."COL_4"))

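As a side note, @SqlHistStat is one of my personal scripts; a minimal sketch of an equivalent query against the AWR views (DBA_HIST_SQLSTAT requires the Diagnostics Pack license, and the per-snapshot averages are simplified here) might look like this:

```sql
-- Hypothetical sketch of what @SqlHistStat retrieves from AWR
select s.begin_interval_time                                     snap_begin
     , h.plan_hash_value
     , h.executions_delta                                        execs
     , h.end_of_fetch_count_delta                                end_of_fetch
     , round(h.elapsed_time_delta / 1e6
             / greatest(h.executions_delta, 1))                  avg_etime
     , round(h.buffer_gets_delta
             / greatest(h.executions_delta, 1))                  avg_lio
     , round(h.rows_processed_delta
             / greatest(h.executions_delta, 1))                  avg_rows
from   dba_hist_sqlstat   h
     , dba_hist_snapshot  s
where  h.snap_id          = s.snap_id
and    h.instance_number  = s.instance_number
and    h.sql_id           = '&sql_id'
and    s.begin_interval_time >= to_date('&from_date', 'ddmmyyyy')
order by s.begin_interval_time;
```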
I don’t know why, but whenever I have to fix a very long, never-ending process, I almost always end up finding that a NESTED LOOPS join has something to do with the pain 😊

It is clear now that the TABLE_ABC_C child table and its parent TABLE_PARENT_ABC need fresh statistics before the FK enabling process is relaunched. I keep saying that, when an estimated cardinality of 1 is reported in an execution plan, there is a big chance that it is due to stale or non-representative statistics.

SQL> exec dbms_stats.gather_table_stats('ABCDXY', 'TABLE_ABC_C', cascade => true, no_invalidate => false);
SQL> exec dbms_stats.gather_table_stats('ABCDXY', 'TABLE_PARENT_ABC', cascade => true, no_invalidate => false);

SQL> @HistStat
Enter value for owner: ABCDXY
Enter value for table_name: TABLE_ABC_C

OBJECT_NAME    OBJECT_TYPE  PREV_STAT_TIME           ROWCNT SAMPLESIZE ESTIMATE_PCT
-------------- ------------ -------------------- ---------- ---------- ------------
TABLE_ABC_C  TABLE       
TABLE_ABC_C  TABLE        09/01/2020 16:59:25           0          0            0
TABLE_ABC_C  TABLE        14/01/2020 16:34:17      946798     946798          100
TABLE_ABC_C  TABLE        14/01/2020 17:09:02      946798     946798          100

SQL> @HistStat
Enter value for owner: ABCDXY
Enter value for table_name: TABLE_PARENT_ABC

OBJECT_NAME         OBJECT_TYPE  PREV_STAT_TIME        ROWCNT SAMPLESIZE ESTIMATE_PCT
------------------- ------------ -------------------- ------- ---------- ------------
TABLE_PARENT_ABC   TABLE       
TABLE_PARENT_ABC   TABLE        09/01/2020 16:59:25        0          0            0
TABLE_PARENT_ABC   TABLE        14/01/2020 16:34:43  9032840    9032840          100
TABLE_PARENT_ABC   TABLE        14/01/2020 17:08:52  9032840    9032840          100
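@HistStat is also a personal script; a hedged sketch of an equivalent query against the optimizer statistics history (the row counts and sample sizes live in the underlying sys.wri$_optstat_tab_history table, which requires SYS-level access) could be:

```sql
-- Hypothetical sketch of what @HistStat retrieves: the statistics
-- history that dbms_stats keeps for each table
select o.object_name
     , o.object_type
     , h.savtime        prev_stat_time
     , h.rowcnt
     , h.samplesize
from   dba_objects                  o
     , sys.wri$_optstat_tab_history h
where  h.obj#        = o.object_id
and    o.owner       = '&owner'
and    o.object_name = '&table_name'
order by h.savtime;
```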

And finally, the coast is clear to re-enable the foreign key in about 10 seconds as shown below:

SQL> exec XZA_DDL.ENABLE_FK(owner => 'ABCDXY', nomtab => 'TABLE_ABC_C', nomfk => NULL);

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.29 --> 10 seconds!

This was possible because the new execution plan switched from that dramatic NESTED LOOPS to a much more suitable HASH JOIN, as shown below:

SQL_ID  56tj8jdcw15jb, child number 0
-------------------------------------
Plan hash value: 1651427782
------------------------------------------------------------------------------------------
| Id  | Operation            | Name              | Rows  | Bytes |TempSpc| Pstart| Pstop |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                   |       |       |       |       |       |
|   1 |  PARTITION RANGE ALL |                   |     1 |    90 |       |     1 |     3 |
|*  2 |   FILTER             |                   |       |       |       |       |       |
|*  3 |    HASH JOIN OUTER   |                   |     1 |    90 |    18M|       |       |
|   4 |     TABLE ACCESS FULL| TABLE_ABC_C       |   946K|    46M|       |     1 |     3 |
|   5 |     TABLE ACCESS FULL| TABLE_PARENT_ABC  |  9032K|   335M|       |     1 |     3 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("B"."COL_1" IS NULL OR "B"."COL_2" IS NULL OR "B"."COL_3" IS NULL 
              OR "B"."COL_4" IS NULL))
   3 - access("B"."COL_4"="A"."COL_4" AND "B"."COL_1"="A"."COL_1" AND
              "B"."COL_2"="A"."COL_2" AND "B"."COL_3"="A"."COL_3")

SQL> @sqlstats
Enter value for sql_id: 56tj8jdcw15jb

     CHILD PLAN_HASH_VALUE   AVG_GETS   AVG_PIOS  AVG_ETIME     EXECS
---------- --------------- ---------- ---------- ---------- ---------
         0      1651427782     133519     109286  27.018412         1

Bottom Line

Regardless of the kind of Oracle SQL operation you are executing, you should always make sure, beforehand, that you have given Oracle enough statistical information about your tables, indexes and columns. If not, rest assured that your luck will eventually run out and you will face performance trouble. In the case of enabling a foreign key constraint, the performance pain is exacerbated by a recursive query joining the parent and child tables: if that recursive query picks a wrong execution plan because of missing or stale statistics, it will drastically delay the FK enabling, and it will be very hard to pinpoint that this recursive query is the root cause of the pain.
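The recursive query appears because the FK is enabled with VALIDATE, which forces Oracle to check every existing child row against the parent. A minimal sketch (with hypothetical table names) illustrating the difference:

```sql
-- Hypothetical sketch: why enabling a FK can be expensive
create table t_parent (id number primary key);
create table t_child  (id number, p_id number);

alter table t_child add constraint t_child_fk
      foreign key (p_id) references t_parent (id)
      disable;

-- ENABLE NOVALIDATE checks only future DML: instantaneous
alter table t_child modify constraint t_child_fk enable novalidate;

-- ENABLE VALIDATE checks all existing child rows against the parent:
-- this is the step that issues the recursive outer-join query shown
-- above, and the step that suffers when statistics are stale
alter table t_child modify constraint t_child_fk enable validate;
```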
