Mohamed Houri’s Oracle Notes

November 28, 2015

All on Adaptive and Extended Cursor Sharing presentation

Filed under: Oracle — hourim @ 8:26 am

Here’s a presentation I gave 3 days ago (26/11/2015) at the French Oracle Meetup in Lille. It details how Oracle (the Adaptive Cursor Sharing part) monitors a bind sensitive cursor before making it bind aware, and how Oracle (the Extended Cursor Sharing part) monitors a bind aware cursor before compiling a new optimal execution plan or sharing an existing one.

October 1, 2015

Oracle Optimizer and SPM plan interaction

Filed under: Oracle — hourim @ 5:25 pm

Still inspired by Dominic Brooks’ post on SQL Plan Management choices, I decided to picture the Oracle CBO behaviour in the presence of enabled and accepted SPM baseline plans:

CBO-SPM interaction diagram

The right-hand part of the picture, when triggered, shows the parsing penalty you will have to pay before running your SQL query: in particular, when there are multiple accepted and enabled SPM plans, the CBO has to try reproducing and costing all of them before making its final decision. The picture also shows that under all circumstances the CBO starts by compiling its execution plan as if it were not constrained by any SPM plan. This clearly demonstrates that if your query suffers from a long hard parse time (the plan generation itself takes a lot of time) then SPM will not help you. This is where the mantra “When you can hint it then Baseline it” ceases to be accurate.
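
To get a feel for how many plans the CBO might have to reproduce and cost at parse time, you can, for example, list the accepted and enabled baselined plans of the statement. The query below is only a minimal sketch (the sql_text filter is hypothetical):

SQL> select sql_handle, plan_name, enabled, accepted, fixed
     from   dba_sql_plan_baselines
     where  sql_text like '%my query text%'  -- hypothetical filter
     order  by created;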

September 18, 2015

Basic versus OLTP compression

Filed under: Oracle — hourim @ 11:53 am

If you are going to archive a table holding a very large amount of data from a running system into a historical table, and you expect this archived table to be read-only, then you had better use the free BASIC compression mode instead of the paid OLTP one. This article aims to show the reason for this preference based on a real-life example.

In order to take advantage of the free BASIC compression mode you first need to insert data using a direct path load (this is why this compression mode used to be known as compression for direct path load, or bulk load, operations) and ensure that there is no trigger or integrity constraint on the target table that would make Oracle silently ignore your requested direct path load. In passing, in contrast to preceding releases, a 12.1.0.2 database will inform you when the direct path load is ignored via the Note at the bottom of the corresponding insert execution plan:

Note
-----
   - PDML disabled because triggers are defined
   - Direct Load disabled because triggers are defined

Here’s an example of BASIC compression I executed on a 12.1.0.2 database:

SELECT  
    t.owner,
    t.table_name,
    t.compress_for,
    s.bytes/1024/1024/1024 GB
FROM 
    dba_tables t ,
    dba_segments s
WHERE 
    s.segment_name = t.table_name
AND s.owner        = 'XXX'
AND t.table_name   = 'T1'
ORDER BY 4;
   
OWNER      TABLE_NAME   COMPRESS_FOR    GB
---------- ------------ ------------- ----------
XXX        T1                         263.114136 

263 GB worth of data, of which we need to keep only the last 2 years and archive the rest.

The first step in this process was to create an empty table cloned from the production table and define it to accept the BASIC mode compression:

-- create a table compressed with BASIC mode
CREATE TABLE T1_HIST
      COMPRESS BASIC
      TABLESPACE HIST_TABSPACE 
         AS
      SELECT * FROM T1
      WHERE 1 = 2;   
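
Before attempting the direct path load it is also worth checking, as mentioned above, that no enabled trigger or referential integrity constraint exists on the freshly created target table; an illustrative check could look like this:

SQL> select trigger_name, status
     from   user_triggers
     where  table_name = 'T1_HIST';

SQL> select constraint_name, constraint_type, status
     from   user_constraints
     where  table_name = 'T1_HIST'
     and    constraint_type = 'R';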

Since I was going to send a huge amount of data into this archive table I decided to use a parallel insert, which needs to be preceded by enabling parallel DML, either by altering the session or, from 12c onwards, by using the appropriate hint (a sketch of the hint alternative follows the alter session command below):

-- enable parallel dml 
SQL> alter session enable parallel dml;        
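
For completeness, from 12c onwards the same effect can be obtained at statement level with the ENABLE_PARALLEL_DML hint instead of altering the session; here is a minimal sketch using the same hypothetical table and DOP as the insert shown below:

-- 12c statement-level alternative to "alter session enable parallel dml"
INSERT /*+ enable_parallel_dml append parallel(bsic,4) */
  INTO T1_HIST bsic
  SELECT /*+ parallel(oltp,4) */ *
  FROM  T1 oltp
  WHERE HIST_DAT > add_months(sysdate,-24);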

The way is now paved to start sending historical data into their new destination:

-- direct path load data older than 2 years into the archived table
INSERT /*+ append parallel(bsic,4) */ 
  INTO T1_HIST bsic
  SELECT
          /*+ parallel(oltp,4) */
              *
    FROM  T1 oltp
       WHERE HIST_DAT > add_months(sysdate,-24);

2,234,898,367 rows created.

More than 2 billion rows direct path loaded.

And here is the pleasant surprise:

SQL> SELECT
    t.owner,
    t.table_name,
    t.compress_for,
    s.bytes/1024/1024/1024 GB
FROM
    dba_tables t ,
    dba_segments s
WHERE
    t.compress_for ='BASIC'
AND
    s.segment_name   = t.table_name
AND s.owner       = 'XXX'
ORDER BY 4;

OWNER      TABLE_NAME   COMPRESS_FOR   GB
---------- ------------ ------------- ----------
XXX         T1_HIST     BASIC         53.2504272 

We went from a source table with 263GB worth of data to a cloned table compressed in BASIC mode, ending up with only 53GB of historical data.

If you are wondering how many rows I have not sent into the archived table, here’s the answer:

SQL> select /*+ parallel(8) */ count(1) 
     from T1
     where HIST_DAT <= add_months(sysdate,-24);
   
7,571,098 

This means that using the free BASIC compression mode I have archived almost all rows from the production table and succeeded in packing the initial 263GB into only 53GB. That’s an excellent compression ratio of 5 (we have divided the initial size by 5), even though Oracle says that the compression ratio depends on the nature of your data and typically ranges between a factor of 2x and 4x.

Had you used the paid OLTP compression mode you would have got an archived table approximately 10% bigger (~60GB). This is because Oracle considers a table compressed in BASIC mode to be read-only and not subject to any update, and therefore silently sets its default PCT_FREE to 0:

SQL> select table_name, pct_free
    from dba_tables
    where table_name = 'T1_HIST';

TABLE_NAME    PCT_FREE
----------- ----------
T1_HIST          0 
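
If you want to check the ~10% figure on your own data, one way, sketched below with hypothetical object names, is to clone the archived rows into an OLTP-compressed table and compare the two segment sizes:

-- hypothetical clone compressed with the (licensed) OLTP mode
CREATE TABLE T1_HIST_OLTP
      COMPRESS FOR OLTP
      TABLESPACE HIST_TABSPACE
   AS
      SELECT * FROM T1_HIST WHERE 1 = 2;

INSERT /*+ append */ INTO T1_HIST_OLTP SELECT * FROM T1_HIST;
COMMIT;

SELECT segment_name, bytes/1024/1024/1024 GB
FROM   dba_segments
WHERE  segment_name IN ('T1_HIST','T1_HIST_OLTP');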

As you can see, if you intend to archive a huge amount of data into a read-only table, want to gain disk space in the process, and don’t want to pay for OLTP compression, then you can opt for the free BASIC compression mode.

There are a few interesting things that came up along with this archiving process, particularly the new 12c way (HYBRID TSM/HWMB) Oracle uses to keep down the number of new extents and to avoid HV enqueue wait events during a parallel direct path load with a high DOP across several instances:

SQL Plan Monitoring Details (Plan Hash Value=3423860299)
====================================================================================
| Id |              Operation               |   Name   |  Rows   |Execs |   Rows   |
|    |                                      |          | (Estim) |      | (Actual) |
====================================================================================
|  0 | INSERT STATEMENT                     |          |         |    5 |        8 |
|  1 |   PX COORDINATOR                     |          |         |    5 |        8 |
|  2 |    PX SEND QC (RANDOM)               | :TQ10000 |    819M |    4 |        8 |
|  3 |     LOAD AS SELECT (HYBRID TSM/HWMB) |          |         |    4 |        8 |
|  4 |      OPTIMIZER STATISTICS GATHERING  |          |    819M |    4 |       2G |
|  5 |       PX BLOCK ITERATOR              |          |    819M |    4 |       2G |
|  6 |        TABLE ACCESS FULL             | T1       |    819M |  442 |       2G |
====================================================================================

Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access(:Z>=:Z AND :Z<=:Z) filter("HIST_DAT">ADD_MONTHS(SYSDATE@!,-24))

Note
-----
   - Degree of Parallelism is 4 because of table property

But this will be detailed in a separate blog article.

September 5, 2015

Bind aware secret sauce (again)

Filed under: Oracle — hourim @ 8:21 am

I am sure many of you have already become bored by my posts on adaptive cursor sharing. I hope this article will be the last one :-). In part III of the series I was unable to figure out the secret sauce Oracle uses to mark a cursor bind aware when the counts of the 3 buckets of a bind sensitive cursor are all greater than 0. Those who want to know what bucket and count represent can start by reading part I and part II.

Thanks to a comment dropped by an anonymous reader, coupled with my understanding of the adaptive cursor sharing mechanism, I think I have succeeded in figuring out that secret sauce. My goal is to publish it and let readers try to break and invalidate it.

To make things simple I created the following function

------------------------------------------------------------------------------
-- File name:   fv_will_cs_be_bind_aware
-- Author   :   Mohamed Houri (Mohamed.Houri@gmail.com)
-- Date     :   29/08/2015
-- Purpose  :   When supplied with 3 parameters 
--                   pin_cnt_bucket_0 : count of bucket_id n°0
--                   pin_cnt_bucket_1 : count of bucket_id n°1
--                   pin_cnt_bucket_2 : count of bucket_id n°2
-- 
--              this function will return a status:
--
--              'Y' if the next execution at any bucket_id will mark the cursor bind aware          
--               
--              'N' if the next execution at any bucket_id will NOT mark the cursor bind aware
--
--------------------------------------------------------------------------------
create or replace function fv_will_cs_be_bind_aware
  (pin_cnt_bucket_0 in number
  ,pin_cnt_bucket_1 in number
  ,pin_cnt_bucket_2 in number)
return 
   varchar2
is
  lv_will_be_bind_aware 
                 varchar2(1) := 'N';
  ln_least_0_2   number      := least(pin_cnt_bucket_0,pin_cnt_bucket_2);
  ln_great_0_2   number      := greatest(pin_cnt_bucket_0,pin_cnt_bucket_2);
 
begin
 if pin_cnt_bucket_0 + pin_cnt_bucket_2 >= pin_cnt_bucket_1
  and ln_least_0_2 >= ceil ((ln_great_0_2-pin_cnt_bucket_1)/3)
  then
    return 'Y';
  else
    return 'N';
  end if;
end fv_will_cs_be_bind_aware;

If you have a cursor with a given combination of the 3 bucket counts (all greater than 0) and you want to know whether the next execution will mark the cursor bind aware or not, you just have to do this:

SQL> select fv_will_cs_be_bind_aware(10,1,3) acs from dual;

ACS
---
Y

Or this

SQL> select fv_will_cs_be_bind_aware(10,1,2) acs from dual;

ACS
---
N

The first call indicates that the next cursor execution will compile a new optimal plan, while the second call indicates that the existing child cursor will still be shared.

It’s now time to practice:

SQL> alter session set cursor_sharing=FORCE;

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          1
           0          1          0
           0          2          0

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100
SQL> /

-- repeat this 7 times

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10 --> 10 executions at bucket_id 0
           0          1          0
           0          2          0

Now change the bind variable value so that the count of bucket_id n°1 gets incremented:

SQL> select count(1) from t_acs where n2 = 10000;

  COUNT(1)
----------
    100000

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1 --> 1 execution at bucket_id 1
           0          2          0

Now change the bind variable value again so that the count of bucket_id n°2 gets incremented:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5 from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1
           0          2          2 --> 2 executions at bucket_id 2

If, at this stage, you want to know whether the next execution at bucket id n°2 will mark the cursor bind aware or not then make a call to the function:

SQL> select fv_will_cs_be_bind_aware(10,1,2) acs from dual;

ACS
----
N

No, it will not, and here is the proof:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1
           0          2          3

And what about the next execution, say at bucket_id n° 0?

SQL> select fv_will_cs_be_bind_aware(10,1,3) acs from dual;

ACS
----
Y

The function indicates that the next execution will compile a new child cursor; let’s check:

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          1
           1          1          0
           1          2          0
           0          0         10
           0          1          1
           0          2          3

Yes it did.

Can we assume from a single test that this function is reliable? No.

You want another example? Here it is:

SQL> -- run the following sql 19 times at bucket_id n°0
SQL> select count(1) from t_acs where n2 = 100;

SQL> -- run the same sql 6 times at bucket_id n°1
SQL> select count(1) from t_acs where n2 = 10000;

SQL> -- run the same sql 2 times at bucket_id n°2
SQL> select count(1) from t_acs where n2 = 1000000;

And here’s the resulting cursor sharing picture:

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5 from
  6     v$sql_cs_histogram
  7 where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          2

Will the next execution compile a new execution plan?

SQL> select fv_will_cs_be_bind_aware(19,6,2) acs from dual;

ACS
---
N

No, it will not, as proved below:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          3

And what about the next execution at the same bucket_id n°2? And the one after that?

SQL> select fv_will_cs_be_bind_aware(19,6,3) acs from dual;

ACS
---
N

SQL> select fv_will_cs_be_bind_aware(19,6,4) acs from dual;

ACS
---
N

And the next execution?

SQL> select fv_will_cs_be_bind_aware(19,6,5) acs from dual;

ACS
----
Y

In the (bucket_id, count) situation shown below, the function indicates that the next execution will mark the cursor bind aware and compile a new execution plan:

SQL> select
  2     child_number
  3    ,bucket_id
  4    ,count
  5  from
  6    v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          5

Let’s verify:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          0
           1          1          0
           1          2          1
           0          0         19
           0          1          6
           0          2          5

Yes it did, as you can see from the appearance of the new child cursor n°1.

Want another example? Here it is:

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          3
           0          1          1
           0          2         11

SQL> select fv_will_cs_be_bind_aware(3,1,11) acs from dual;

ACS
---
N

SQL> select count(1) from t_acs where n2 = 10000;

  COUNT(1)
----------
    100000

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          3
           0          1          2
           0          2         11

SQL> select fv_will_cs_be_bind_aware(3,2,11) acs from dual;

ACS
---
Y

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';


CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          0
           1          1          0
           1          2          1
           0          0          3
           0          1          2
           0          2         11

I am sure that someone will come up with a simple situation where the function returns a wrong result. Bear in mind that this is exactly what I am hoping for, so you’re welcome to try.

Footnote

If you want to break this function then here’s the model you can use (you need a histogram on the n2 column; a sketch for gathering it follows the model):

create table t_acs(n1  number, n2 number);

BEGIN
     for j in 1..1200150 loop
      if j = 1 then
       insert into t_acs values (j, 1);
      elsif j>1 and j<=101 then
       insert into t_acs values (j, 100);
      elsif j>101 and j<=1101 then
       insert into t_acs values (j, 1000);
      elsif j>10001 and j<=110001 then
       insert into t_acs values (j, 10000);
     else
      insert into t_acs values(j, 1000000);
     end if;
    end loop;
   commit;
END;
/ 
create index t_acs_i1 on t_acs(n2);
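
And here is a minimal sketch for gathering statistics with a histogram on the n2 column (the bucket count is illustrative):

begin
  dbms_stats.gather_table_stats(
     ownname    => user
    ,tabname    => 'T_ACS'
    ,method_opt => 'for columns n2 size 254'
    ,cascade    => true
  );
end;
/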

Update : 08/09/2015 : added a supplementary if condition to the function

September 2, 2015

CBO decision: unique or non-unique index?

Filed under: Oracle — hourim @ 5:43 pm

I have been asked to look at one of those particularly frustrating situations that only running systems can produce. It is an update of a single table using the complete set of its primary key columns in order to locate and update a unique row. The update looks like this:

UPDATE T1
SET 
  {list of columns}
WHERE 
    T1_DATE    = :B9
AND T1_I_E_ID  = :B8
AND T1_TYPE    = :B7
AND DATE_TYPE  = :B6
AND T1_AG_ID   = :B5
AND T1_ACC_ID  = :B4
AND T1_SEC_ID  = :B3
AND T1_B_ID    = :B2
AND T1_FG_ID   = :B1;

The 9 columns in the above where clause represent the primary key of the T1 table. You might be surprised to learn that this update didn’t use the primary key index and preferred instead a range scan of an existing 3-column index, plus a table access by index rowid, to locate and update a unique row:

----------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------
|   0 | UPDATE STATEMENT             |        |       |       |     1 (100)|
|   1 |  UPDATE                      | T1     |       |       |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1     |     1 |   122 |     1   (0)|
|   3 |    INDEX RANGE SCAN          | IDX_T1 |     1 |       |     1   (0)|
----------------------------------------------------------------------------
   2 - filter("T1_TYPE"=:B7 AND "DATE_TYPE"=:B6 AND "T1_I_E_ID"=:B8 AND 
              "T1_AG_ID"=TO_NUMBER(:B5) AND "T1_DATE"=TO_TIMESTAMP(:B9) AND 
              "T1_FG_ID"=TO_NUMBER(:B1))
   3 - access("T1_SEC_ID"=TO_NUMBER(:B3) AND "T1_B_ID"=TO_NUMBER(:B2) AND 
              "T1_ACC_ID"=TO_NUMBER(:B4)) 

The same update, when hinted with the primary key index, uses the following much more desirable execution plan:

------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |        |     1 |   126 |     1   (0)|
|   1 |  UPDATE            | T1     |       |       |            |
|*  2 |   INDEX UNIQUE SCAN| PK_T19 |     1 |   126 |     1   (0)|
------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1_I_E_ID"=:B8 AND "T1_TYPE"=:B7 AND "DATE_TYPE"=:B6 
              AND "T1_AG_ID"=TO_NUMBER(:B5) AND "T1_ACC_ID"=TO_NUMBER(:B4) AND 
              "T1_SEC_ID"=TO_NUMBER(:B3) AND "T1_B_ID"=TO_NUMBER(:B2) AND 
              "T1_FG_ID"=TO_NUMBER(:B1) AND "T1_DATE"=TO_TIMESTAMP(:B9))

I don’t know how Oracle manages to get the same cost (Cost = 1) for two completely different indexes, one with 9 columns and one (a subset of the first) with only 3 columns, not necessarily in the same starting order.

So why has Oracle not selected the primary key unique index?

First, here are the available statistics on the primary key columns:

SQL> select
      column_name
     ,num_distinct
     ,num_nulls
     ,histogram
    from
      all_tab_col_statistics
    where
     table_name = 'T1'
   and
    column_name in ('T1_DATE'
                   ,'T1_I_E_ID'
                   ,'T1_TYPE'
                   ,'DATE_TYPE'
                   ,'T1_AG_ID'
                   ,'T1_ACC_ID'
                   ,'T1_SEC_ID'
                   ,'T1_B_ID'
                   ,'T1_FG_ID' );

COLUMN_NAME         NUM_DISTINCT  NUM_NULLS HISTOGRAM
------------------- ------------ ---------- ---------------
T1_I_E_ID                      2          0 FREQUENCY
T1_TYPE                        5          0 FREQUENCY
DATE_TYPE                      5          0 FREQUENCY
T1_AG_ID                     106          0 FREQUENCY
T1_ACC_ID                    182          0 FREQUENCY
T1_DATE                     2861          0 HEIGHT BALANCED
T1_SEC_ID                3092480          0 NONE
T1_B_ID                     1452          0 HEIGHT BALANCED
T1_FG_ID                       1          0 FREQUENCY

And here’s the corresponding “update” 10053 trace file, restricted to the part relevant to my investigation:

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: T1  Alias: T1
    #Rows: 387027527  #Blks:  6778908  AvgRowLen:  126.00  ChainCnt:  0.00

Index Stats::
Index: IDX_T1  Col#: 7 8 5
 LVLS: 3  #LB: 1568007  #DK: 3314835  LB/K: 1.00   DB/K: 25.00  CLUF: 84758374.00

Index: PK_T19  Col#: 1 2 3 4 5 7 8 37 6
 LVLS: 4  #LB: 4137117  #DK: 377310281  LB/K: 1.00  DB/K: 1.00  CLUF: 375821219.00

And here is the part of the same trace file where the index choice is made:

Access Path: index (UniqueScan)
    Index: PK_T19
    resc_io: 5.00  resc_cpu: 37647 ----------------------> spot this
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000 
    Cost: 1.00  Resp: 1.00  Degree: 1
    ColGroup Usage:: PredCnt: 3  Matches Full:  Partial: 
    ColGroup Usage:: PredCnt: 3  Matches Full:  Partial:

Access Path: index (AllEqRange)
    Index: IDX_T1
    resc_io: 5.00  resc_cpu: 36797 ----------------------> spot this
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000 
    Cost: 1.00  Resp: 1.00  Degree: 1

Best:: AccessPath: IndexRange
  Index: IDX_T1
    Cost: 1.00  Degree: 1  Resp: 1.00  Card: 0.00  Bytes: 0
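
For those who want to produce this kind of CBO trace themselves, it can typically be generated as follows (a minimal sketch; replace the explain plan with a hard parse of the statement of interest):

SQL> alter session set events '10053 trace name context forever, level 1';
SQL> explain plan for UPDATE T1 SET ... ;
SQL> alter session set events '10053 trace name context off';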

Looking closely at the above trace file I didn’t find any difference in the index costing information (ix_sel_with_filters, resc_io, cost) that would favour the non-unique IDX_T1 index over the PK_T19 primary key unique index, except for the resc_cpu value, which equals 36797 for the former and 37647 for the latter. I haven’t considered the clustering factor because the index I wanted the CBO to use is a unique index. The two indexes have the same cost in this case. So what extra information is the CBO using to prefer the non-unique index over the unique one?

This issue reminded me of an old OTN thread in which the original poster says that, under the default CPU costing model, when two indexes have the same cost Oracle will consider using the less CPU-expensive index.

Since I had a practical case of two different indexes with the same cost, I decided to check this assumption by changing the costing model from CPU to I/O:

SQL> alter session set "_optimizer_cost_model"=io;

SQL> explain plan for 
UPDATE T1
SET 
  {list of columns}
WHERE 
    T1_DATE   = :B9
AND T1_I_E_ID = :B8
AND T1_TYPE   = :B7
AND DATE_TYPE = :B6
AND T1_AG_ID  = :B5
AND T1_ACC_ID = :B4
AND T1_SEC_ID = :B3
AND T1_B_ID   = :B2
AND T1_FG_ID  = :B1;

SQL> select * from table(dbms_xplan.display);

Plan hash value: 704748203
-------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost  |
-------------------------------------------------------------
|   0 | UPDATE STATEMENT   |        |     1 |   126 |     1 |
|   1 |  UPDATE            | T1     |       |       |       |
|*  2 |   INDEX UNIQUE SCAN| PK_T19 |     1 |   126 |     1 |
-------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1_I_E_ID"=:B8 AND "T1_TYPE"=:B7 AND
              "DATE_TYPE"=:B6 AND "T1_AG_ID"=TO_NUMBER(:B5) AND
              "T1_ACC_ID"=TO_NUMBER(:B4) AND "T1_SEC_ID"=TO_NUMBER(:B3) AND
              "T1_B_ID"=TO_NUMBER(:B2) AND "T1_FG_ID"=TO_NUMBER(:B1) AND
              "T1_DATE"=TO_TIMESTAMP(:B9))

Note
-----
   - cpu costing is off (consider enabling it)

Spot on. We get the desired primary key index without any help.

However, changing the default costing model is not acceptable in the client’s PRODUCTION database. Continuing my root cause investigation I got the feeling that histograms were messing up this CBO index choice. This is why I decided to get rid of them and analyze the corresponding 10053 trace file:

SQL> exec dbms_stats.gather_table_stats (user, 'T1', method_opt => 'for all columns size 1');


***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
  Table: T1  Alias: T1
    #Rows: 387029216  #Blks:  6778908  AvgRowLen:  126.00  ChainCnt:  0.00

Index Stats::
  Index: IDX_T1  Col#: 7 8 5
  LVLS: 3  #LB: 1625338  #DK: 3270443  LB/K: 1.00   DB/K: 26.00  CLUF: 87831324.00
  
  Index: PK_T19  Col#: 1 2 3 4 5 7 8 37 6
  LVLS: 4  #LB: 4335908  #DK: 395902926  LB/K: 1.00  DB/K: 1.00  CLUF: 394578898.00

Access Path: index (UniqueScan)
    Index: PK_T19
    resc_io: 5.00  resc_cpu: 37647
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000 
    Cost: 1.00  Resp: 1.00  Degree: 1
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.0000
  ColGroup Usage:: PredCnt: 3  Matches Full: #1  Partial:  Sel: 0.0000

  Access Path: index (AllEqRange)
    Index: IDX_T1
    resc_io: 31.00  resc_cpu: 370081
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000 
    Cost: 6.20  Resp: 6.20  Degree: 1 ------> spot the cost

Access Path: index (AllEqUnique)
    Index: PK_T19
    resc_io: 5.00  resc_cpu: 37647
    ix_sel: 0.000000  ix_sel_with_filters: 0.000000 
    Cost: 1.00  Resp: 1.00  Degree: 1
 One row Card: 1.000000

  Best:: AccessPath: IndexUnique
  Index: PK_T19
    Cost: 1.00  Degree: 1  Resp: 1.00  Card: 1.00  Bytes: 0

And now that the cost of accessing the non-unique index (Cost = 6.2) has become about 6 times greater than that of the unique index (Cost = 1), Oracle prefers the primary key index without any help:

============
Plan Table
============
-------------------------------------------------+----------------------+
| Id  | Operation           | Name  | Rows  | Bytes | Cost  | Time      |
-------------------------------------------------+----------------------+
| 0   | UPDATE STATEMENT    |       |       |       |     1 |           |
| 1   |  UPDATE             | T1    |       |       |       |           |
| 2   |   INDEX UNIQUE SCAN | PK_T19|     1 |   126 |     1 |  00:00:01 |
-------------------------------------------------+-----------------------+

The final (and acceptable) solution to this issue was to hint an instance of the same query to use the primary key index, and to attach a SQL profile, built from the plan_hash_value of that hinted execution plan, to the original packaged query.
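
For reference only, this kind of plan transfer is commonly scripted around DBMS_SQLTUNE.IMPORT_SQL_PROFILE (for instance via the well-known coe_xfr_sql_profile.sql script). The sketch below is purely illustrative: the hint text, the profile name and the substitution variable are hypothetical, and the real hint set would be extracted from the outline of the hinted plan:

declare
  lv_sql_text clob;
begin
  -- text of the original (un-hinted) packaged update
  select sql_fulltext into lv_sql_text
  from   v$sqlarea
  where  sql_id = '&original_sql_id';

  dbms_sqltune.import_sql_profile(
     sql_text    => lv_sql_text
    ,profile     => sqlprof_attr('INDEX(@"UPD$1" "T1"@"UPD$1" "PK_T19")')  -- hypothetical hint
    ,name        => 'PROFILE_UPD_T1_PK'                                    -- hypothetical name
    ,force_match => true
  );
end;
/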

Bottom line: under the default CPU costing model, when two (or more) indexes have the same cost Oracle will prefer the index that is going to consume the least amount of CPU (resc_cpu). And, before jumping to collecting histograms (particularly HEIGHT BALANCED ones) by default, be aware that they do participate in Oracle’s perception of the amount of CPU the different indexes will need, and ultimately in the CBO’s index desirability.

August 21, 2015

Cardinality Feedback: a practical case

Filed under: Oracle — hourim @ 5:36 pm

Here is an interesting case of cardinality feedback collected from a running 11.2.0.3 system. A simple query against a single table has a perfect first-execution response time with, to the human eye, a quite acceptable difference between Oracle’s cardinality estimates and the actual rows, as shown below:

SELECT 
   tr_id
FROM 
    t1 t1
WHERE 
     t1.t1_col_name= 'GroupID'
AND  t1.t1_col_value= '6276931'
AND EXISTS(SELECT 
               1 
            FROM  
                t1 t2
            WHERE t1.tr_id   = t2.tr_id
            AND   t2.t1_col_name= 'TrRangeOrder'
            AND   t2.t1_col_value= 'TrOrderPlace'
           );

SQL_ID  8b3tv5uh8ckfb, child number 0
-------------------------------------

Plan hash value: 1066392926
--------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |      1 |        |      1 |00:00:00.14 |
|   1 |  NESTED LOOPS SEMI           |                         |      1 |      1 |      1 |00:00:00.14 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      1 |      6 |00:00:00.07 |
|*  3 |    INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      1 |      6 |00:00:00.03 |
|*  4 |   TABLE ACCESS BY INDEX ROWID| T1                      |      6 |      1 |      1 |00:00:00.07 |
|*  5 |    INDEX UNIQUE SCAN         | T1_PK                   |      6 |      1 |      6 |00:00:00.07 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."T1_COL_NAME"='GroupID' AND "T1"."T1_COL_VALUE"='6276931')
   4 - filter("T2"."T1_COL_VALUE"='TrOrderPlace')
   5 - access("T1"."TR_ID"="T2"."TR_ID" AND 
               "T2"."T1_COL_NAME"='TrRangeOrder')

And here is the second, dramatic, execution plan and response time, due this time to cardinality feedback optimisation:


SQL_ID  8b3tv5uh8ckfb, child number 1
-------------------------------------
Plan hash value: 3786385867
----------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                         |      1 |        |      1 |00:10:40.14 |
|   1 |  NESTED LOOPS                  |                         |      1 |        |      1 |00:10:40.14 |
|   2 |   NESTED LOOPS                 |                         |      1 |      1 |    787K|00:09:31.00 |
|   3 |    SORT UNIQUE                 |                         |      1 |      1 |    787K|00:02:44.83 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      1 |    787K|00:02:41.58 |
|*  5 |      INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      1 |    787K|00:00:36.46 |
|*  6 |    INDEX UNIQUE SCAN           | T1_PK                   |    787K|      1 |    787K|00:06:45.25 |
|*  7 |   TABLE ACCESS BY INDEX ROWID  | T1                      |    787K|      1 |      1 |00:05:00.24 |
----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T2"."T1_COL_NAME"='TrRangeOrder' AND "T2"."T1_COL_VALUE"='TrOrderPlace')
   6 - access("T1"."TR_ID"="T2"."TR_ID" AND "T1"."T1_COL_NAME"='GroupID')
   7 - filter("T1"."T1_COL_VALUE"='6276931')

Note
-----
   - cardinality feedback used for this statement

There is no real noticeable difference between the estimated and actual rows in the first run of the query (E-Rows = 1 versus A-Rows = 6) that would justify a re-optimisation. But Oracle did it anyway and marked child cursor n°0 as a candidate for cardinality feedback:

SQL> select
       sql_id
      ,child_number
      ,use_feedback_stats
    from
      v$sql_shared_cursor
    where
      sql_id = '8b3tv5uh8ckfb';

SQL_ID        CHILD_NUMBER U
------------- ------------ -
8b3tv5uh8ckfb            0 Y

The bad news with this Oracle decision, however, is that we went from a quasi-instantaneous response time to a catastrophic 10 minutes. In the first plan the always suspicious estimated cardinality of 1 is not significantly far from the actual rows (6), so why has Oracle decided to re-optimize the first cursor? It might be that when Oracle rounds its cardinality estimate up to 1 for a cursor being monitored for cardinality feedback, it flags somewhere that this cursor is subject to re-optimization at its next execution, whatever the actual rows turn out to be (close to 1 or not).

Fortunately, this second execution has also been marked for re-optimisation:

SQL> select
       sql_id
      ,child_number
      ,use_feedback_stats
    from
      v$sql_shared_cursor
    where
      sql_id = '8b3tv5uh8ckfb';

SQL_ID        CHILD_NUMBER U
------------- ------------ -
8b3tv5uh8ckfb            0 Y
8b3tv5uh8ckfb            1 Y

And the third execution of the query produces the following interesting execution plan

SQL_ID  8b3tv5uh8ckfb, child number 2
-------------------------------------

Plan hash value: 1066392926
--------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |      1 |        |      1 |00:00:00.01 |
|   1 |  NESTED LOOPS SEMI           |                         |      1 |      1 |      1 |00:00:00.01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      6 |      6 |00:00:00.01 |
|*  3 |    INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      6 |      6 |00:00:00.01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID| T1                      |      6 |      1 |      1 |00:00:00.01 |
|*  5 |    INDEX UNIQUE SCAN         | T1_PK                   |      6 |      1 |      6 |00:00:00.01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."T1_COL_NAME"='GroupID' AND "T1"."T1_COL_VALUE"='6276931')
   4 - filter("T2"."T1_COL_VALUE"='TrOrderPlace')
   5 - access("T1"."TR_ID"="T2"."TR_ID" AND
              "T2"."T1_COL_NAME"='TrRangeOrder')

Note
-----
   - cardinality feedback used for this statement   

Oracle is back to its first execution plan. The new estimations coincide perfectly with the actuals, so Oracle decided to stop monitoring this cursor with cardinality feedback, as shown below:

SQL> select
       sql_id
      ,child_number
      ,use_feedback_stats
    from
      v$sql_shared_cursor
    where
      sql_id = '8b3tv5uh8ckfb';

SQL_ID        CHILD_NUMBER U
------------- ------------ -
8b3tv5uh8ckfb            0 Y
8b3tv5uh8ckfb            1 Y
8b3tv5uh8ckfb            2 N

Several questions come to my mind at this stage of the investigation:

  1. What are the circumstances under which Oracle marks a cursor for cardinality feedback optimisation?
  2. How does Oracle decide that E-Rows is significantly different from A-Rows, so that a cursor re-optimization will be done? In other words, is E-Rows = 1 significantly different from A-Rows = 6? Or does that suspicious cardinality of 1 participate in Oracle’s decision to re-optimize a cursor monitored with cardinality feedback?

Let’s try to answer the first question. There is only one table involved in this query, with two conjunctive predicates. The two predicate columns have the following statistics:

SQL> select
        column_name
       ,num_distinct
       ,density
       ,histogram
     from 
	    all_tab_col_statistics
     where
        table_name = 'T1'
     and
       column_name in ('T1_COL_NAME','T1_COL_VALUE');

COLUMN_NAME     NUM_DISTINCT    DENSITY HISTOGRAM
--------------- ------------ ---------- ---------------
T1_COL_NAME           103     4,9781E-09 FREQUENCY
T1_COL_VALUE      14833664   ,000993049  HEIGHT BALANCED

The presence of histograms on these two columns, particularly the HEIGHT BALANCED one, participates strongly in Oracle’s decision to monitor the cursor for cardinality feedback. In order to be sure of it I decided to get rid of the histograms on both columns and re-query the column statistics.
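
The removal was done with something resembling the following sketch (the method_opt value is illustrative):

SQL> begin
       dbms_stats.gather_table_stats(
          ownname       => user
         ,tabname       => 'T1'
         ,method_opt    => 'for columns size 1 T1_COL_NAME T1_COL_VALUE'
         ,no_invalidate => false
       );
     end;
     /

And here are the new column statistics: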

SQL> select
        column_name
       ,num_distinct
       ,density
       ,histogram
     from 
	    all_tab_col_statistics
     where
        table_name = 'T1'
     and
       column_name in ('T1_COL_NAME','T1_COL_VALUE');

COLUMN_NAME     NUM_DISTINCT    DENSITY HISTOGRAM
--------------- ------------ ---------- ---------
T1_COL_NAME           103    ,009708738 NONE
T1_COL_VALUE      15477760   6,4609E-08 NONE

The new cursor is no longer monitored by cardinality feedback, as shown below:

SQL_ID  fakc7vfbu1mam, child number 0
-------------------------------------

Plan hash value: 739349168
--------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |      1 |        |      1 |00:02:00.68 |
|*  1 |  HASH JOIN SEMI              |                         |      1 |      6 |      1 |00:02:00.68 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      6 |      6 |00:00:00.01 |
|*  3 |    INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      6 |      6 |00:00:00.01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      6 |    787K|00:02:00.14 |
|*  5 |    INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      6 |    787K|00:00:12.36 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."TR_ID"="T2"."TR_ID")
   3 - access("T1"."T1_COL_NAME"='GroupID' AND "T1"."T1_COL_VALUE"='6276931')
   5 - access("T2"."T1_COL_NAME"='TrRangeOrder' AND "T2"."T1_COL_VALUE"='TrOrderPlace')

   SQL> select
         sql_id
        ,child_number
        ,use_feedback_stats
    from
       v$sql_shared_cursor
    where
        sql_id = 'fakc7vfbu1mam';

SQL_ID        CHILD_NUMBER U
------------- ------------ -
fakc7vfbu1mam            0 N --> cursor not re-optimisable 

Without histograms on the two columns Oracle did not monitor the query for cardinality feedback. Unfortunately, getting rid of the histograms was not an option the client would accept, nor was changing this packaged query to stop the optimizer unnesting the EXISTS subquery into its parent query, since the parent always generates only a couple of rows that do not hurt performance when filtered by the EXISTS subquery. Attaching a SQL Profile was also discarded because several copies of the same query exist in the packaged application, which would have necessitated a couple of extra SQL Profiles.

The last option left in my hands was to collect extended statistics (a column group) so that Oracle would be able to get accurate estimations and hence stop using cardinality feedback:

SQL> SELECT
       dbms_stats.create_extended_stats
       (ownname   => user
       ,tabname   => 'T1'
       ,extension => '(T1_COL_NAME,T1_COL_VALUE)'
      )
  FROM dual;

DBMS_STATS.CREATE_EXTENDED_STATS(
---------------------------------
SYS_STUE3EBVNLB6M1SYS3A07$LD52

SQL> begin
      dbms_stats.gather_table_stats
            (user
           ,'T1'
           ,method_opt    => 'for columns SYS_STUE3EBVNLB6M1SYS3A07$LD52 size skewonly'
           ,cascade       => true
           ,no_invalidate => false
            );
    end;
    /

SQL> select
       column_name
      ,num_distinct
      ,density
      ,histogram
    from all_tab_col_statistics
    where
        table_name = 'T1'
    and column_name in ('T1_COL_NAME','T1_COL_VALUE', 'SYS_STUE3EBVNLB6M1SYS3A07$LD52');

COLUMN_NAME                    NUM_DISTINCT    DENSITY HISTOGRAM
------------------------------ ------------ ---------- ---------------
SYS_STUE3EBVNLB6M1SYS3A07$LD52     18057216 ,000778816 HEIGHT BALANCED
T1_COL_NAME                          103    4,9781E-09 FREQUENCY
T1_COL_VALUE                     14833664   ,000993049 HEIGHT BALANCED


SQL_ID  dn6p58b9b6348, child number 0
-------------------------------------
Plan hash value: 1066392926
--------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Starts | E-Rows | A-Rows |   A-Time   |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |      1 |        |      1 |00:00:00.01 |
|   1 |  NESTED LOOPS SEMI           |                         |      1 |      3 |      1 |00:00:00.01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1                      |      1 |      3 |      6 |00:00:00.01 |
|*  3 |    INDEX RANGE SCAN          | IDX_T1_NAME_VALUE       |      1 |      3 |      6 |00:00:00.01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID| T1                      |      6 |    832K|      1 |00:00:00.01 |
|*  5 |    INDEX UNIQUE SCAN         | T1_PK                   |      6 |      1 |      6 |00:00:00.01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T1"."T1_COL_NAME"='GroupID' AND "T1"."T1_COL_VALUE"='6276931')
   4 - filter("T2"."T1_COL_VALUE"='TrOrderPlace')
   5 - access("T1"."TR_ID"="T2"."TR_ID" AND
              "T2"."T1_COL_NAME"='TrRangeOrder')

SQL> select
       sql_id
      ,child_number
      ,use_feedback_stats
    from
      v$sql_shared_cursor
    where
      sql_id = 'dn6p58b9b6348';
   
SQL_ID        CHILD_NUMBER U
------------- ------------ -
dn6p58b9b6348            0 N

This time, for E-Rows = 3 and A-Rows = 6, Oracle decided that there is no significant difference between the cardinality estimates and the actual rows, so the cursor is no longer subject to cardinality feedback optimization.

You might have noticed that I forced the extended statistics column to have a histogram; otherwise cardinality feedback would kick in again. In fact I conducted several experiments to see when cardinality feedback occurs and when it does not, depending on the existence or absence of the column group extension, its type of statistics, and the statistics gathered on the two underlying predicate columns:
cardinality feedback experiments summary table

August 12, 2015

Adaptive Cursor Sharing triggering mechanism

Filed under: cursor sharing — hourim @ 10:08 pm

Inspired by Dominic Brooks’ last post on SQL Plan Management choices, I decided to do the same kind of work for my thoughts on the Adaptive and Extended Cursor Sharing triggering mechanism:

ACS triggering diagram

Once a cursor is bind aware, and hence subject to a possible plan optimization at each execution, keep a careful eye on the number of child cursors the Extended Cursor Sharing layer is going to produce; a sketch of such a check is shown below.
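
One simple way of keeping that eye on it is to count, from time to time, the child cursors of the bind aware statements; a minimal sketch (the threshold is arbitrary):

SQL> select sql_id, count(*) child_cursors
     from   v$sql
     where  is_bind_aware = 'Y'
     group  by sql_id
     having count(*) > 5
     order  by 2 desc;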

August 5, 2015

Flash back causing library cache: mutex X

Filed under: Oracle — hourim @ 5:28 pm

Recently one of our applications suffered from a severe performance issue. It is an application running on a database (11.2.0.4.0) used to validate a pre-production release. This performance issue delayed the test campaign and the validation process by more than 3 days. The ASH data taken during the degraded period shows this:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('15072015 16:00:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('15072015 16:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                COUNT(1)
----------------------------------- ----------
library cache: mutex X                    3928
kksfbc child completion                    655
cursor: pin S wait on X                    580
PX Deq: Slave Session Stats                278
                                           136
db file sequential read                    112
cursor: pin S                               35
null event                                  26
latch: shared pool                          15
cursor: mutex S                             13
library cache lock                          11
read by other session                       10
log file parallel write                      5
PX Deq: Signal ACK EXT                       3
os thread startup                            3
log file sync                                2
latch free                                   1
db file parallel write                       1
SQL*Net more data from client                1
enq: PS - contention                         1
cursor: mutex X                              1
direct path read                             1
control file sequential read                 1
CSS operation: action                        1

As you can see, the dominant wait event is:

EVENT                          COUNT(1)
------------------------------ -------
library cache: mutex X         3928

The library cache: mutex X wait event is a concurrency wait event, one of the following 6 mutex-related wait events:

	cursor: pin S
	cursor: pin X
	cursor: pin S wait on X
	cursor: mutex S
	cursor: mutex X
	library cache: mutex X

Mutexes are similar to locks except that they protect objects in shared memory rather than rows in tables and indexes. Whenever a session wants to read or write a library cache object it needs to pin that object (generally a cursor) and acquire a mutex on it. If another session simultaneously wants to access the same piece of memory it will also try to acquire a mutex on it. That session might then wait on one of those library or cursor mutex wait events because the first session got there before it and has not yet released the mutex.

So, back to my actual case: what got so out of hand that library cache: mutex X waits made the database unusable?

SQL> select
       sql_id
      ,sql_child_number
      ,session_id
      ,in_parse
      ,in_sql_execution
    from
      gv$active_session_history
    where sample_time between to_date('15072015 16:00:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('15072015 16:30:00', 'ddmmyyyy hh24:mi:ss')
    and event = 'library cache: mutex X'
    order by sql_id; 

SQL_ID        SQL_CHILD_NUMBER SESSION_ID IN_PARSE   IN_SQL_EXECUTION
------------- ---------------- ---------- ---------- ----------------
20f2kut7fg4g0               -1          8 Y          N
20f2kut7fg4g0               -1         20 Y          N
20f2kut7fg4g0                1         24 N          Y
20f2kut7fg4g0               -1         40 Y          N
20f2kut7fg4g0               -1         60 Y          N
20f2kut7fg4g0               -1         88 Y          N
20f2kut7fg4g0                1         89 N          Y
20f2kut7fg4g0                1         92 N          Y
20f2kut7fg4g0               -1        105 Y          N
20f2kut7fg4g0               -1        106 Y          N
20f2kut7fg4g0                1        109 N          Y
20f2kut7fg4g0               -1        124 Y          N
20f2kut7fg4g0               -1        128 Y          N
20f2kut7fg4g0               -1        143 Y          N
20f2kut7fg4g0               -1        157 Y          N
20f2kut7fg4g0                1        159 N          Y
20f2kut7fg4g0               -1        160 Y          N
20f2kut7fg4g0               -1        161 Y          N
20f2kut7fg4g0                1        172 N          Y
20f2kut7fg4g0                1        178 N          Y
20f2kut7fg4g0               -1        191 Y          N
20f2kut7fg4g0               -1        192 Y          N
20f2kut7fg4g0               -1        194 Y          N
20f2kut7fg4g0                1        209 N          Y
20f2kut7fg4g0               -1        223 Y          N
20f2kut7fg4g0                1        229 N          Y
20f2kut7fg4g0               -1        241 Y          N
20f2kut7fg4g0               -1        246 Y          N
20f2kut7fg4g0               -1        258 Y          N
20f2kut7fg4g0                1        259 N          Y
20f2kut7fg4g0                1        280 N          Y
20f2kut7fg4g0                1        294 N          Y
20f2kut7fg4g0               -1        309 Y          N
20f2kut7fg4g0               -1        310 Y          N
20f2kut7fg4g0               -1        328 Y          N
20f2kut7fg4g0                1        348 N          Y
20f2kut7fg4g0               -1        382 Y          N
20f2kut7fg4g0               -1        413 Y          N
20f2kut7fg4g0               -1        415 Y          N
20f2kut7fg4g0                1        428 N          Y
20f2kut7fg4g0               -1        449 Y          N
20f2kut7fg4g0               -1        450 Y          N
20f2kut7fg4g0               -1        462 Y          N
20f2kut7fg4g0                1        467 N          Y
20f2kut7fg4g0               -1        480 Y          N
20f2kut7fg4g0               -1        484 Y          N
20f2kut7fg4g0                1        516 N          Y
20f2kut7fg4g0               -1        533 Y          N
20f2kut7fg4g0                1        535 N          Y
20f2kut7fg4g0               -1        546 Y          N
20f2kut7fg4g0               -1        565 Y          N
20f2kut7fg4g0                1        568 N          Y
20f2kut7fg4g0               -1        584 Y          N
20f2kut7fg4g0               -1        585 Y          N
20f2kut7fg4g0               -1        601 Y          N
20f2kut7fg4g0               -1        602 Y          N
20f2kut7fg4g0               -1        615 Y          N
20f2kut7fg4g0               -1        619 Y          N
20f2kut7fg4g0               -1        635 Y          N
20f2kut7fg4g0               -1        652 Y          N
20f2kut7fg4g0               -1        667 Y          N
20f2kut7fg4g0               -1        668 Y          N
20f2kut7fg4g0               -1        687 Y          N
20f2kut7fg4g0               -1        705 Y          N
20f2kut7fg4g0               -1        717 Y          N
20f2kut7fg4g0               -1        721 Y          N
20f2kut7fg4g0               -1        733 Y          N
20f2kut7fg4g0               -1        735 Y          N
20f2kut7fg4g0               -1        753 Y          N
20f2kut7fg4g0               -1        754 Y          N
20f2kut7fg4g0                1        770 N          Y
20f2kut7fg4g0               -1        773 Y          N
20f2kut7fg4g0               -1        785 Y          N
20f2kut7fg4g0               -1        786 Y          N
20f2kut7fg4g0               -1        804 Y          N

75 rows selected.

I have limited the output to just one sql_id (20f2kut7fg4g0) in order to keep the explanation clear and simple.

What does this particular sql_id, executed by 75 different sessions that are sometimes parsing and sometimes executing, represent?

SQL> with got_my_sql_id
    as ( select sql_id, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 09:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 10:30:00', 'ddmmyyyy hh24:mi:ss')
    and event  = 'library cache: mutex X'
    group by sql_id)
    select distinct sql_id, sql_text
    from v$sql b
   where exists (select null
                 from got_my_sql_id a
                 where a.sql_id = b.sql_id)
   order by sql_id; 

SQL_ID        SQL_TEXT
------------ ----------------------------------------------------------------------
20f2kut7fg4g0 /* Flashback Table */ INSERT /*+ PARALLEL(S, DEFAULT) PARALLEL(T,
                 DEFAULT) */ INTO "DEV_ZXX"."CLOSED_DAY" SELECT
              /*+ USE_NL(S) ORDERED PARALLEL(S, DEFAULT) PARALLEL(T, DEFAULT) */
              S.* FROM SYS_TEMP_FBT T , "DEV_ZXX"."CLOSED_DAY"
              as of SCN :1 S WHERE T.rid = S.
              rowid and T.action = 'I' and T.object# = :2

The above piece of SQL code is generated by Oracle behind the scenes when flashing back the content of a given table. And this is exactly what this client was doing: at the end of their pre-production test campaign they flash back a certain number of tables to the data they had at the beginning of the test. And since the generated code runs in parallel with the default degree, it produced this kind of monitored execution plan:


Parallel Execution Details (DOP=96 , Servers Allocated=96) 

SQL Plan Monitoring Details (Plan Hash Value=4258977226)
===============================================================================
| Id |        Operation        |      Name       |  Rows   | Execs |   Rows   |
|    |                         |                 | (Estim) |       | (Actual) |
===============================================================================
|  0 | INSERT STATEMENT        |                 |         |     1 |          |
|  1 |   LOAD AS SELECT        |                 |         |     1 |          |
|  2 |    PX COORDINATOR       |                 |         |    82 |          |
|  3 |     PX SEND QC (RANDOM) | :TQ10000        |     322 |    81 |          |
|  4 |      PX BLOCK ITERATOR  |                 |     322 |    81 |          |
|  5 |       TABLE ACCESS FULL | TABLE_RULE_SUP  |     322 |     4 |          |
===============================================================================

For every flashed back table Oracle started 96 parallel servers (96 sessions) in order to execute a simple insert statement, causing the observed library cache: mutex X wait events. The DOP of 96 is the maximum DOP, which is in fact the default DOP determined by the following simplified formula:

DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT
DOP = 2 x 48 = 96
SQL> show parameter parallel

NAME                               TYPE        VALUE
---------------------------------- ----------- ----------
fast_start_parallel_rollback       string      LOW
parallel_adaptive_multi_user       boolean     TRUE
parallel_automatic_tuning          boolean     FALSE
parallel_degree_limit              string      CPU
parallel_degree_policy             string      MANUAL
parallel_execution_message_size    integer     16384
parallel_force_local               boolean     FALSE
parallel_instance_group            string
parallel_io_cap_enabled            boolean     FALSE
parallel_max_servers               integer     100
parallel_min_percent               integer     0
parallel_min_servers               integer     0
parallel_min_time_threshold        string      AUTO
parallel_server                    boolean     FALSE
parallel_server_instances          integer     1
parallel_servers_target            integer     100
_parallel_syspls_obey_force        boolean     TRUE
parallel_threads_per_cpu           integer     2
recovery_parallelism               integer     0
SQL> show parameter cpu

NAME                           TYPE        VALUE
------------------------------ ----------- -----
cpu_count                       integer     48
parallel_threads_per_cpu        integer     2
resource_manager_cpu_allocation integer     48
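The same maximum default DOP can also be derived directly from the instance parameters; here is a minimal sketch (assuming read access to v$parameter):

SQL> -- default maximum DOP = cpu_count * parallel_threads_per_cpu
SQL> select to_number(cpu.value) * to_number(tpc.value) as default_max_dop
     from   v$parameter cpu, v$parameter tpc
     where  cpu.name = 'cpu_count'
     and    tpc.name = 'parallel_threads_per_cpu';

With cpu_count = 48 and parallel_threads_per_cpu = 2 this gives the observed maximum DOP of 96.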

Since there is no possibility to hint the internal flashback code so that it will not execute in parallel, all I was left with was to prevent Oracle from starting a huge number of parallel processes by limiting the parallel_max_servers parameter to 8; as such, the effective DOP is bounded to 8 whatever the cpu_count is.
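The change itself is a one-liner; a minimal sketch, assuming the parameter can be changed dynamically in memory on this site:

SQL> alter system set parallel_max_servers = 8 scope = memory;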

Once this was done I observed the following new situation for one flashed back sql_id (a5u912v53t11t):
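The report below is a classical SQL Monitoring report; it can be produced with something along these lines (a sketch using the above sql_id; the type and report_level values are my usual choices, not something specific to this case):

SQL> select dbms_sqltune.report_sql_monitor(
            sql_id       => 'a5u912v53t11t',
            type         => 'TEXT',
            report_level => 'ALL') as report
     from dual;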


Global Information
------------------------------
 Status              :  DONE
 Instance ID         :  1
 Session             :  XXXXX (172:40851)
 SQL ID              :  a5u912v53t11t
 SQL Execution ID    :  16777236
 Execution Started   :  07/16/2015 11:21:55
 First Refresh Time  :  07/16/2015 11:21:55
 Last Refresh Time   :  07/16/2015 11:21:55
 Duration            :  .011388s
 Module/Action       :  JDBC Thin Client/-
 Service             :  SYS$USERS
 Program             :  JDBC Thin Client
 DOP Downgrade       :  92%                       

Global Stats
=======================================================
| Elapsed |   Cpu   | Concurrency |  Other   | Buffer |
| Time(s) | Time(s) |  Waits(s)   | Waits(s) |  Gets  |
=======================================================
|    0.05 |    0.00 |        0.04 |     0.00 |     19 |
=======================================================

Parallel Execution Details (DOP=8 , Servers Requested=96 , Servers Allocated=8)
==============================================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   | Concurrency |Buffer |
|                |       |         | Time(s) | Time(s) |  Waits(s)   | Gets  |
==============================================================================
| PX Coordinator | QC    |         |    0.00 |    0.00 |             |     4 |
| p000           | Set 1 |       1 |    0.01 |         |        0.01 |     3 |
| p001           | Set 1 |       2 |    0.01 |         |        0.01 |     3 |
| p002           | Set 1 |       3 |    0.00 |         |        0.00 |     3 |
| p003           | Set 1 |       4 |    0.00 |         |        0.00 |     3 |
| p004           | Set 1 |       5 |    0.00 |    0.00 |        0.00 |     3 |
| p005           | Set 1 |       6 |    0.01 |         |        0.01 |       |
| p006           | Set 1 |       7 |    0.00 |         |        0.00 |       |
| p007           | Set 1 |       8 |    0.01 |         |        0.01 |       |
==============================================================================

SQL Plan Monitoring Details (Plan Hash Value=96405358)
============================================================
| Id |        Operation        |   Name   |  Rows   | Cost |
|    |                         |          | (Estim) |      |
============================================================
|  0 | INSERT STATEMENT        |          |         |      |
|  1 |   LOAD AS SELECT        |          |         |      |
|  2 |    PX COORDINATOR       |          |         |      |
|  3 |     PX SEND QC (RANDOM) | :TQ10000 |     409 |    2 |
|  4 |      PX BLOCK ITERATOR  |          |     409 |    2 |
|  5 |       TABLE ACCESS FULL | TABLE_CLS|     409 |    2 |
============================================================

Notice how Oracle serviced the insert statement with 8 parallel servers instead of the requested 96 servers. This is a clear demonstration of how the default DOP can be bounded:

Parallel Execution Details (DOP=8, Servers Requested=96, Servers Allocated=8)

Unfortunately, despite this implicit limitation of the parallel run, the application was still suffering from the same library cache symptoms (less than before though) as shown below:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 10:57:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 11:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                  COUNT(1)
------------------------------------ ----------
library cache: mutex X                      518
                                            382
db file sequential read                     269
read by other session                        42
kksfbc child completion                      37
null event                                   31
log file parallel write                      18
cursor: pin S wait on X                      12
latch: shared pool                            7
cursor: pin S                                 7
log file sync                                 5
latch free                                    5
enq: RO - fast object reuse                   3
SQL*Net more data from client                 2
db file parallel write                        2
enq: CR - block range reuse ckpt              1
os thread startup                             1
SQL> select sql_id, session_id,in_parse, in_sql_execution
    from gv$active_session_history
    where sample_time between to_date('16072015 10:57:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 11:30:00', 'ddmmyyyy hh24:mi:ss')
    and event = 'library cache: mutex X'
    order by sql_id;

SQL_ID        SESSION_ID IN_PARSE IN_SQL_EXEC
------------- ---------- -------- -----------
a5u912v53t11t        516 Y        N
a5u912v53t11t        494 Y        N
a5u912v53t11t        343 Y        N
a5u912v53t11t        482 Y        N

Finally we agreed with the client to disable parallelism (by setting the parallel_max_servers parameter to 1) so that the flashback treatment would run serially:

SQL> show parameter parallel_max_servers

NAME                           TYPE        VALUE
------------------------------ ----------- -----
parallel_max_servers           integer     1

Once this was done the test campaign finally started to perform very quickly, with the following picture from ASH:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 14:15:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 15:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                  COUNT(1)
------------------------------------- ----------
                                             966
db file sequential read                      375
db file scattered read                        49
log file parallel write                       46
log file sync                                 22
db file parallel write                        13
null event                                     8
local write wait                               7
SQL*Net more data from client                  5
os thread startup                              3
reliable message                               3
enq: PS - contention                           3
enq: RO - fast object reuse                    3
cursor: pin S wait on X                        1
direct path read                               1
Disk file operations I/O                       1
enq: CR - block range reuse ckpt               1
enq: TX - row lock contention                  1

The flashback treatment completely ceased to run in parallel and the test campaign started to perform quickly again.

This is not an invitation to apply such a drastic and brutal workaround to reduce the effect of many sessions woken up by a very high degree of parallelism, itself due to the default maximum DOP. It is rather a demonstration of:

  • how a high degree of parallelism can affect the locking in the library cache
  • how the parallel_max_servers parameter can bound the DOP of your query

August 4, 2015

Degree of Parallelism is 16 because of table property

Filed under: Oracle — hourim @ 10:11 am

I have been pleasantly surprised by the following Note at the bottom of an execution plan coming from a 12.1.0.2.0 Oracle instance


SQL> select * from v$version;

BANNER                                                                               CON_ID
-------------------------------------------------------------------------------- ----------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production              0
PL/SQL Release 12.1.0.2.0 - Production                                                    0
CORE    12.1.0.2.0      Production                                                        0
TNS for Linux: Version 12.1.0.2.0 - Production                                            0
NLSRTL Version 12.1.0.2.0 - Production                                                    0


SQL> create table t_par as select rownum n1, trunc((rownum-1)/3) n2, mod(rownum, 5) n3
    from dual
    connect by level<=1e6;
  
SQL> create index t_part_idx on t_par(n1);

Index created.
 
SQL> alter table t_par parallel 16;

Table altered.  

SQL> select count(1) from t_par where n1> 1;

  COUNT(1)
----------
    999999

SQL> select * from table(dbms_xplan.display_cursor);
----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |       |       |    48 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |     5 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |     5 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |     5 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   999K|  4882K|    48   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| T_PAR    |   999K|  4882K|    48   (3)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(:Z>=:Z AND :Z<=:Z)
       filter("N1">1)

Note
-----
   - Degree of Parallelism is 16 because of table property

As the above Note points out, we can immediately tell that the Optimizer decided to run the query in parallel because the T_PAR table has been decorated with a DOP of 16:

SQL> select table_name,degree
  2  from user_tables
  3  where table_name = 'T_PAR';

TABLE_NAME    DEGREE
------------ -------
T_PAR        16 

A nice 12c addition.

A couple of months ago a query running on 11.2.0.3, which used to run very quickly, suddenly started deviating dangerously from its habitual execution time. The end user told me that they hadn't changed anything and asked me to investigate the root cause of this performance degradation. The corresponding SQL Real-Time Monitoring report looked like this:

Global Stats
======================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Buffer | Read | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Gets  | Reqs | Bytes | Reqs  | Bytes |
======================================================================================
|     799 |     443 |      356 |        0.01 |     3M | 398K |  11GB |  122K |  24GB |
======================================================================================

Parallel Execution Details (DOP=4 , Servers Allocated=8)
SQL Plan Monitoring Details (Plan Hash Value=637438362)
========================================================================================================================
| Id    |                 Operation                  |       Name        |  Rows   |Execs |   Rows   | Temp | Activity |
|       |                                            |                   | (Estim) |      | (Actual) |      |   (%)    |
========================================================================================================================
|     0 | SELECT STATEMENT                           |                   |         |    9 |        0 |      |     0.13 |
|     1 |   PX COORDINATOR                           |                   |         |    9 |          |      |          |
|     2 |    PX SEND QC (RANDOM)                     | :TQ10003          |     19M |    4 |          |      |          |
|     3 |     HASH JOIN RIGHT SEMI                   |                   |     19M |    4 |        0 |      |          |
|     4 |      PX RECEIVE                            |                   |      3M |    4 |     1853 |      |          |
|     5 |       PX SEND HASH                         | :TQ10002          |      3M |    4 |     1853 |      |          |
|     6 |        VIEW                                | VW_NSO_1          |      3M |    4 |     1853 |      |          |
|     7 |         FILTER                             |                   |         |    4 |     1853 |      |          |
|     8 |          NESTED LOOPS                      |                   |      3M |    4 |     1853 |      |          |
|     9 |           BUFFER SORT                      |                   |         |    4 |       38 |      |          |
|    10 |            PX RECEIVE                      |                   |         |    4 |       38 |      |          |
|    11 |             PX SEND ROUND-ROBIN            | :TQ10000          |         |    1 |       38 |      |          |
|    12 |              HASH JOIN                     |                   |   69556 |    1 |       38 |      |          |
|    13 |               INLIST ITERATOR              |                   |         |    1 |     6258 |      |          |
|    14 |                TABLE ACCESS BY INDEX ROWID | TAB_001X          |   69556 |  840 |     6258 |      |          |
|    15 |                 INDEX RANGE SCAN           | IDX_TAB_001X25    |   69556 |  840 |     6258 |      |          |
|    16 |               INDEX FAST FULL SCAN         | PK_TAB_00X13      |     18M |    1 |      19M |      |     0.27 |
|    17 |           INDEX RANGE SCAN                 | PK_IDX_MAIN_TAB   |      36 |   38 |     1853 |      |          |
| -> 18 |      BUFFER SORT                           |                   |         |    4 |        0 |  26G |    34.18 |
| -> 19 |       PX RECEIVE                           |                   |    648M |    4 |     566M |      |     4.14 |
| -> 20 |        PX SEND HASH                        | :TQ10001          |    648M |    1 |     566M |      |    13.89 |
| -> 21 |         TABLE ACCESS FULL                  | MAIN_TABLE_001    |    648M |    1 |     566M |      |    47.40 |
========================================================================================================================

The BUFFER SORT operation at line 18 was killing the performance of this query, since it was buffering 566M rows and spilling 26GB to temp.

Looking back at the previous execution plans shows that they were serial plans!!! What made this new plan run in parallel? I was practically sure where this was coming from. I know that this application rebuilds indexes from time to time, that very often a parallel rebuild is used to accelerate the operation, and that very often the DBA forgets to set the indexes back to their default degree at the end of the rebuild process. Indeed the index PK_IDX_MAIN_TAB was left at a DOP of 4 while it shouldn't have been. Putting this index back to degree 1 restored the serial execution plan the underlying query used to follow in the past:

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|      43 |      43 |     0.02 |    11 |     4M |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=1734192894)
============================================================================================
| Id |              Operation              |       Name        |  Rows   |Execs |   Rows   |
|    |                                     |                   | (Estim) |      | (Actual) |
============================================================================================
|  0 | SELECT STATEMENT                    |                   |         |    1 |      108 |
|  1 |   HASH JOIN RIGHT SEMI              |                   |     19M |    1 |      108 |
|  2 |    VIEW                             | VW_NSO_1          |    701K |    1 |      108 |
|  3 |     FILTER                          |                   |         |    1 |      108 |
|  4 |      NESTED LOOPS                   |                   |    701K |    1 |      108 |
|  5 |       HASH JOIN                     |                   |   19387 |    1 |        3 |
|  6 |        INLIST ITERATOR              |                   |         |    1 |        3 |
|  7 |         TABLE ACCESS BY INDEX ROWID | TAB_001X          |   19387 |  168 |        3 |
|  8 |          INDEX RANGE SCAN           | IDX_TAB_001X25    |   19387 |  168 |        3 |
|  9 |        INDEX FAST FULL SCAN         | PK_TAB_00X13      |     18M |    1 |      19M |
| 10 |       INDEX RANGE SCAN              | PK_IDX_MAIN_TAB   |      36 |    3 |      108 |
| 11 |    TABLE ACCESS FULL                | MAIN_TABLE_001    |    648M |    1 |     677M |
============================================================================================ 
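For the record, spotting and resetting such leftover parallel decorations can be done along these lines (a minimal sketch; it assumes the indexes belong to the connected schema and reuses the index name from this case):

SQL> -- list indexes left with a non-default degree (DEGREE is stored as a string)
SQL> select index_name, degree
     from   user_indexes
     where  trim(degree) not in ('1', 'DEFAULT');

SQL> -- put the offending index back to serial
SQL> alter index PK_IDX_MAIN_TAB noparallel;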

In this context of rebuilt indexes left at a DOP > 1, and given this nice 12c Note explaining why Oracle decided to use a parallel run, I was curious to know whether the Note would show the same information when the parallel plan is due to an index having a DOP > 1:

SQL> alter table t_par noparallel;

SQL> alter index T_PART_IDX parallel 16;

SQL> select count(1) from t_par where n1> 1;

  COUNT(1)
----------
    999999

SQL> select * from table(dbms_xplan.display_cursor);

SQL_ID  4s7n5z52gun33, child number 0
-------------------------------------
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |       |       |   610 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE           |            |     1 |     5 |            |          |        |      |            |
|   2 |   PX COORDINATOR          |            |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10000   |     1 |     5 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE        |            |     1 |     5 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR    |            |   999K|  4882K|   610   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       INDEX FAST FULL SCAN| T_PART_IDX |   999K|  4882K|   610   (1)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(:Z>=:Z AND :Z<=:Z)
       filter("N1">1)

Unfortunately there is no Note indicating that the above parallel execution plan is due to the parallel degree of the index T_PART_IDX.

July 9, 2015

Stressed ASH

Filed under: Oracle — hourim @ 5:29 pm

It is well known that any record found in dba_hist_active_session_history has inevitably been routed there from v$active_session_history. If so, then how could we interpret the following cut & paste from a running production system?

ASH first

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
                                                                        372
direct path read                                                        185
log file parallel write                                                  94
Disk file Mirror Read                                                    22
control file sequential read                                             20
control file parallel write                                              18
direct path write temp                                                   16
Streams AQ: qmn coordinator waiting for slave to start                   12
db file parallel read                                                    11
gc cr multi block request                                                 6
enq: KO - fast object checkpoint                                          4
db file sequential read                                                   3
ges inquiry response                                                      3
os thread startup                                                         2
PX Deq: Signal ACK RSG                                                    2
enq: CF - contention                                                      1
PX Deq: Slave Session Stats                                               1
Disk file operations I/O                                                  1
IPC send completion sync                                                  1
reliable message                                                          1
null event                                                                1
enq: CO - master slave det                                                1
db file parallel write                                                    1
gc current block 2-way                                                    1

AWR next

SQL> select event, count(1)
    from dba_hist_active_sess_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
SQL*Net break/reset to client                                         12950
enq: TM - contention                                                  12712
                                                                        624
db file sequential read                                                 386
enq: TX - row lock contention                                           259
SQL*Net message from dblink                                              74
direct path read                                                         62
SQL*Net more data from dblink                                            27
log file parallel write                                                  26
log file sync                                                            15
SQL*Net more data from client                                             9
control file sequential read                                              7
Disk file Mirror Read                                                     6
gc cr grant 2-way                                                         5
db file parallel write                                                    4
read by other session                                                     3
control file parallel write                                               3
Streams AQ: qmn coordinator waiting for slave to start                    3
log file sequential read                                                  2
direct path read temp                                                     2
enq: KO - fast object checkpoint                                          2
gc cr multi block request                                                 1
CSS initialization                                                        1
gc current block 2-way                                                    1
reliable message                                                          1
db file parallel read                                                     1
gc buffer busy acquire                                                    1
ges inquiry response                                                      1
direct path write temp                                                    1
rdbms ipc message                                                         1
os thread startup                                                         1

12,950 samples of the SQL*Net break/reset to client wait event and 12,712 samples of the enq: TM - contention wait event are present in AWR but not found in ASH. How can we interpret this situation?

This 11.2.0.4.0 database is implemented under a RAC infrastructure with 2 instances. Let’s look at the ASH of the two instances separately

Instance 1 first

SQL> select event, count(1)
    from v$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

 no rows selected

Instance 2 next

SQL> select event, count(1)
    from v$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
                                                                        372
direct path read                                                        185
log file parallel write                                                  94
Disk file Mirror Read                                                    22
control file sequential read                                             20
control file parallel write                                              18
direct path write temp                                                   16
Streams AQ: qmn coordinator waiting for slave to start                   12
db file parallel read                                                    11
gc cr multi block request                                                 6
enq: KO - fast object checkpoint                                          4
db file sequential read                                                   3
ges inquiry response                                                      3
os thread startup                                                         2
PX Deq: Signal ACK RSG                                                    2
enq: CF - contention                                                      1
PX Deq: Slave Session Stats                                               1
Disk file operations I/O                                                  1
IPC send completion sync                                                  1
reliable message                                                          1
null event                                                                1
enq: CO - master slave det                                                1
db file parallel write                                                    1
gc current block 2-way                                                    1

Everything sampled in ASH in that specific time interval comes from the second instance, while the first instance doesn't report any record for the corresponding interval. This inevitably points either to the ASH size of instance 1 or to an imbalanced workload between the two instances:

ASH size first

SQL> select
  2        inst_id
  3        ,total_size
  4      from gv$ash_info;

   INST_ID TOTAL_SIZE
---------- ----------
         1  100663296
         2  100663296

ASH Activity next

SQL> select
        inst_id
       ,total_size
       ,awr_flush_emergency_count
     from gv$ash_info;

   INST_ID TOTAL_SIZE AWR_FLUSH_EMERGENCY_COUNT
---------- ---------- -------------------------
         1  100663296                       136
         2  100663296                         0

The activity is in fact mainly oriented towards instance 1, and the abnormal and unusual 12,950 SQL*Net break/reset to client wait events have exacerbated the rate of inserts into the ASH buffer of instance 1, generating the 136 awr_flush_emergency_count and, as such, the discrepancy between ASH and AWR.

This is also confirmed by the difference in the ASH retention period between the two instances

Instance 1 first, where only about 3 hours of ASH data are kept

SQL> select min(sample_time), max(sample_time)
  2  from v$active_session_history;

MIN(SAMPLE_TIME)                         MAX(SAMPLE_TIME)
---------------------------------------  -------------------------
08-JUL-15 05.51.20.502 AM                08-JUL-15 08.35.48.233 AM

Instance 2 next, where several days' worth of ASH data are still present

SQL> select min(sample_time), max(sample_time)
  2  from v$active_session_history;

MIN(SAMPLE_TIME)                         MAX(SAMPLE_TIME)
---------------------------------------  -------------------------
25-JUN-15 20.01.43                       08-JUL-15 08.37.17.233 AM

The solution would be one of the following points (I think in the order of priority):

  • Solve the SQL*Net break/reset to client issue which is dramatically filling up the ASH buffer, causing an unexpectedly rapid flush of more important and more precise data
  • Balance the workload between the two instances
  • Increase the ASH size of instance 1 by means of alter system set "_ash_size"=25165824; (see the sketch after this list)
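A minimal sketch of that last option, applied to instance 1 only (the instance name used in the SID clause is a placeholder, and underscore parameters should only be changed after checking with Oracle Support):

SQL> -- grow the ASH buffer of instance 1 to 24MB and check the result
SQL> alter system set "_ash_size" = 25165824 sid = 'INST1' scope = memory;

SQL> select inst_id, total_size from gv$ash_info;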

In the next article I will explain how I identified what was causing these unusual SQL*Net break/reset to client wait events.
