Mohamed Houri’s Oracle Notes

July 16, 2016

SQL Server 2016 : parallel DML

Filed under: Oracle — hourim @ 2:16 pm

In contrast to Oracle, where parallel DML has been possible for a long time, it is only with the 2016 release that SQL Server made a parallel insert-select possible. This article aims to illustrate this parallel operation in SQL Server and to show how closely it resembles its Oracle counterpart.

I will first create the source and the target tables needed for the demonstration:

drop table t1;
drop table t2;

create table t1 (id      INT,
                 Nom     VARCHAR(100),
                 Prenom  VARCHAR(100),
                 Ville   VARCHAR(100)
                );

insert into t1 (id, Nom, Prenom, Ville)
select TOP 1000000
       ROW_NUMBER() over (order by a.name) ROWID,
       'Bruno',
       CASE WHEN ROW_NUMBER() over (order by a.name) % 2 = 1 THEN 'Smith'
            ELSE 'Mohamed' END,
       CASE WHEN ROW_NUMBER() over (order by a.name) % 10 = 1 THEN 'Paris'
            WHEN ROW_NUMBER() over (order by a.name) % 10 = 5 THEN 'Lille'
            WHEN ROW_NUMBER() over (order by a.name) % 10 = 3 THEN 'Marseille'
            ELSE 'Londres' END
FROM sys.all_objects a
cross join sys.all_objects b;

create table t2 (id      INT,
                 Nom     VARCHAR(100),
                 Prenom  VARCHAR(100),
                 Ville   VARCHAR(100)
                );
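The nested CASE logic above is easier to verify outside T-SQL. Here is a small Python sketch (my own cross-check, not part of the original script) that reproduces the same row-number-to-value mapping:

```python
def prenom(n):
    # ROW_NUMBER() % 2 = 1 -> 'Smith', otherwise 'Mohamed'
    return 'Smith' if n % 2 == 1 else 'Mohamed'

def ville(n):
    # ROW_NUMBER() % 10 drives the city distribution
    m = n % 10
    if m == 1:
        return 'Paris'
    if m == 5:
        return 'Lille'
    if m == 3:
        return 'Marseille'
    return 'Londres'

# over any window of 10 consecutive row numbers, 3 rows get a named
# city and the remaining 7 fall through to 'Londres'
sample = [ville(n) for n in range(1, 11)]
```

So 10% of the rows are 'Paris', 10% 'Lille', 10% 'Marseille' and the remaining 70% 'Londres'.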

I have created table t1 with 1,000,000 rows and an empty table t2 with the same structure. Below, I am going to insert the content of t1 into t2, hoping that this insert will operate in parallel.

Here is the SQL Server version I am using:

select @@version;

Microsoft SQL Server 2016 (RTM) - 13.0.1601.5 (X64)
Copyright (c) Microsoft Corporation  Developer Edition

One of the parallel DML prerequisites in SQL Server is the compatibility_level parameter, which should be set to 130. Let's verify this parameter's value before trying the insert/select operation:

select name, compatibility_level from sys.databases where name = 'master';

name	compatibility_level
master	130

If the value of this parameter is not 130, you can set it using the following command (I don't know the impact of this change on your application, so don't change it without measuring its possible side effects):

 alter database master set compatibility_level = 130;

Finally, I am now ready to launch the insert/select operation and gather its corresponding execution plan:

insert into t2 WITH (TABLOCK)
select * from t1;

[Figure: graphical execution plan of the parallel insert]

Notice that I have added a WITH (TABLOCK) hint to this insert so that it can operate in parallel. Compare this with an Oracle direct path load, where a lock is automatically acquired on the inserted table (until the transaction is committed): it seems that in SQL Server we need to acquire this lock explicitly with the (TABLOCK) hint.

You might ask how I figured out that the insert/select operation was done in parallel. The two yellow arrows in the above execution plan indicate the parallel nature of the execution. However, they do not tell us that the DOP (Degree Of Parallelism) is 2. If you want to know the actual DOP used by this operation, you have to hover over the first operation (towards the left: INSERT) to see a tooltip showing that the degree of parallelism is 4. However, I still have not found a convenient way to capture a mouse tooltip using Greenshot. When dealing with SQL Server execution plans I very often prefer the free version of the SQL Sentry Plan Explorer, from which I have captured the following part of the above execution plan, where you can see that the number of executions (Actual Executions, or Starts in Oracle terms) equals 4, indicating that the DOP is 4:

[Figure: SQL Sentry Plan Explorer view of parallel_dml.sqlplan]

Notice, by the way, that in contrast to the graphical execution plan, where the INSERT operation doesn't seem to be done in parallel, the SQL Sentry Plan Explorer clearly indicates that the insert has been performed in parallel.

If you know how parallel processing is handled in Oracle, you will certainly not be lost when you start dealing with parallel processing in SQL Server. Almost all the parallel concepts are identical, that is:

  • A maximum of 2 concurrent DFOs (Data Flow Operations) active per DFO tree
  • The number of parallel servers (threads), which is 2 * DOP
  • The data distribution methods used between parallel servers
  • etc.
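The second bullet is pure arithmetic; a trivial sketch (my own bookkeeping helper, not an engine API) makes it explicit:

```python
def max_parallel_servers(dop, active_dfos=2):
    # with at most 2 DFOs concurrently active per DFO tree, each needing
    # its own set of `dop` servers (threads), a statement can use 2 * DOP
    return active_dfos * dop
```

For the DOP-4 insert above, that means up to 8 threads: one set of producers and one set of consumers.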

I have mentioned above that one of the prerequisites for a parallel insert/select operation to be successfully accomplished is an explicit lock on the inserted table. This paved the way for me to check whether the reasons that prevent an Oracle direct path load, namely triggers and foreign keys, can also restrict the parallel insert in SQL Server. Here's the demo, first with a trigger and then with a foreign key implemented on the t2 table:

CREATE TRIGGER t2_A_Ins ON t2
FOR INSERT
AS
BEGIN
    DECLARE @Firstname NVARCHAR(50);
    SET @Firstname = 'Unknown';
END;
truncate table t2;

insert into t2 WITH (TABLOCK)
select * from t1;

[Figure: execution plan of the insert with the trigger in place]
As you can see, the insert trigger annihilates the parallel insert operation in SQL Server.

drop trigger t2_A_Ins;
ALTER TABLE t1 ADD CONSTRAINT t1_pk
UNIQUE (id);

ALTER TABLE t2 ADD CONSTRAINT t2_t1_fk
FOREIGN KEY (id) references t1(id);

And now an insert into table t2, which carries a foreign key:

truncate table t2;

insert into t2 WITH (TABLOCK)
select * from t1;

[Figure: execution plan of the insert with the foreign key in place]

Again, the existence of a foreign key on the t2 table pre-empted the parallel insert/select operation.
Funnily enough, the resemblance with Oracle holds once more.

Before pushing the "Publish" button I played with the model again and, from time to time, I was unable to reproduce exactly the conclusions made in this article about the reasons that prevent a parallel insert. I will certainly come back to this article when my investigations are finished.

June 24, 2016

Index Skip Scan : how many columns are skippable?

Filed under: Index — hourim @ 10:49 am

If you want to know what an index skip scan operation is, then Richard Foote's article and Jonathan Lewis's one, which you can find here and here respectively, are two reliable references.

This article deals with a particular query using an index skip scan, with the following Real Time SQL Monitoring (RTSM) report:

Global Information
------------------------------
 Status              :  EXECUTING  
 Instance ID         :  1                   
 Session             :  C##MHOURI (965:52281)  
 SQL ID              :  c2mah5hxau88v       
 SQL Execution ID    :  16777216            
 Execution Started   :  06/15/2016 13:03:15 
 First Refresh Time  :  06/15/2016 13:03:21 
 Last Refresh Time   :  06/15/2016 14:49:32 
 Duration            :  6378s                  
 Module/Action       :  SQL*Plus/-          
 Service             :  orcl                
 Program             :  sqlplus.exe         
 Fetch Calls         :  2280           


Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    6385 |     193 |     6192 |  2280 |     2M |   2M |  14GB |
================================================================
  
SQL Plan Monitoring Details (Plan Hash Value=517922901)
======================================================================================
| Id |               Operation               | Name  |   Time    | Start  | Activity |
|    |                                       |       | Active(s) | Active |   (%)    |
======================================================================================
|  0 | SELECT STATEMENT                      |       |      6297 |     +67|          |
|  1 |   TABLE ACCESS BY INDEX ROWID BATCHED | T     |      6297 |     +67|      0.52|
|->2 |    INDEX SKIP SCAN                    | T_IDX |      6378 |     +2 |     99.48|
======================================================================================

As you can see, after 6385 seconds the query is still running. The Start Active column, when correlated with the Time Active one, indicates that 99% of this time is spent on the index skip scan operation. We all know that an index skip scan is an indicator of a missing index, or a pointer to a wrongly designed one; but what caught my attention in this case is that the skipped index is a 7-column composite index of which the first 6 columns have been skipped.

Now the question becomes: are those 6 leading columns repetitive enough (i.e. do they contain few distinct values) to be skippable? Let's first look at the size of this index and at how its column values are distributed:

SQL> @sizeBysegName
Enter value for segment_name : IDX

SEGMENT_TYPE       TABLESPACE_NAME       SEGMENT_NAME     PARTITION    GB
------------------ --------------------- --------------   ----------  -------
INDEX              IDX_TBS                T_IDX                        28.44
                                                                       -------
Total Segment Size                                                     28.44

SQL> col column_name format a20
SQL> select
  2         i.column_name
  3        ,i.column_position
  4       ,s.num_distinct
  5      from
  6         user_ind_columns i
  7       , user_tab_col_statistics s
  8       where
  9        i.column_name = s.column_name
 10       and   index_name = 'T_IDX'
 11       order by column_position asc;

COLUMN_NAME          COLUMN_POSITION NUM_DISTINCT
-------------------- --------------- ------------
BKR_ID                             1         1337
PRD_ID                             2      3342592
PRD_QTY                            3       197008
IS_PROCESED                        4            2
OPR_PRD_QTY                        5       170576
PRD_FMLY                           6            9
PROCESS_ID                         7            1

7 rows selected.

Interestingly, apart from the first column, which is relatively repetitive compared to the index size, neither the second column, which contains more than 3 million distinct values, nor the third and the fifth, which each count about two hundred thousand distinct values, can be considered repetitive. So why have they been skipped?

Additionally, skipping an index, say idx(a,b,c), on its column c amounts to rewriting the original query by adding two predicates on the first two index columns. Something as follows:

From

select
    {list of columns}
from table
where
   c = :1;

To

select
    {list of columns}
from table
where
    a in {all distinct values of a}
and b in {all distinct values of b}
and c = :1;

The two added predicates explain why the number of distinct values in the leading columns of the index should be very small for the index skip scan path to be considered.
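This also gives a back-of-the-envelope way to judge whether a skip scan is reasonable: the implied number of index probes grows with the product of the distinct values of the skipped leading columns. A Python sketch (my own rough model, not Oracle's actual costing):

```python
from math import prod

def skip_scan_probes(ndv_of_skipped_columns):
    # the rewrite adds "col IN {all distinct values}" for every skipped
    # leading column, i.e. one index probe per combination of those values
    return prod(ndv_of_skipped_columns)

# idx(a, b, c) accessed on c with a and b skipped:
# ndv(a) = 3 and ndv(b) = 4 imply 12 probe combinations
small = skip_scan_probes([3, 4])

# the 6 skipped leading columns of T_IDX (NDVs from the listing above)
huge = skip_scan_probes([1337, 3342592, 197008, 2, 170576, 9])
```

Under this naive model, the NDVs of T_IDX imply an astronomical combination count, which is consistent with the skip scan still running after 6385 seconds.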

If you want to reproduce this kind of index operation, here's the model I have engineered:

create table t
as select
       rownum                prd_id
      ,rownum-1              sec_prd_id
      ,trunc((rownum-1)/3)   prd_qty
      ,trunc((rownum-1)/3)   opr_prd_qty
      ,mod(rownum,100)       bkr_id
      ,case mod(rownum,1000000) when 0 then 'x'
                                when 1 then 'y'
                                when 2 then 'z'
                                when 3 then 'a'
                                when 4 then 'b'
                                when 5 then 'c'
                                else 'd' end prd_fmly
      ,case rownum when 1 then 0 else 1 end  is_procesed
      ,0                                     process_id
from dual
connect by level <= 1e6;

create index t_idx on t (bkr_id, prd_id, prd_qty, is_procesed,
                         opr_prd_qty, prd_fmly, process_id);

begin
  dbms_stats.gather_table_stats
    (user, 't', method_opt => 'for all columns size 1');
end;
/

SQL> select
  2         i.column_name
  3        ,i.column_position
  4       ,s.num_distinct
  5       ,s.histogram
  6      from
  7         user_ind_columns i
  8       , user_tab_col_statistics s
  9       where
 10        i.column_name = s.column_name
 11       and   index_name = 'T_IDX'
 12       order by column_position asc;

COLUMN_NAME          COLUMN_POSITION NUM_DISTINCT HISTOGRAM
-------------------- --------------- ------------ ------------
BKR_ID                             1          100 NONE
PRD_ID                             2      1000000 NONE
PRD_QTY                            3       337344 NONE
IS_PROCESED                        4            2 NONE
OPR_PRD_QTY                        5       337344 NONE
PRD_FMLY                           6            7 NONE
PROCESS_ID                         7            1 NONE

7 rows selected.
SQL> set autotrace on explain stat
SQL>  select /*+ opt_param ('optimizer_index_cost_adj' 20) */
  2             t.*
  3          from t
  4          where
  5              t.prd_id in (5507203,78400086,359798,4565189,9495,12215)
  6          and t.process_id = 0
  7          and t.opr_prd_qty <> 0;

    PRD_ID SEC_PRD_ID    PRD_QTY OPR_PRD_QTY     BKR_ID P IS_PROCESED PROCESS_ID
---------- ---------- ---------- ----------- ---------- - ----------- ----------
     12215      12214       4071        4071         15 d           1          0
      9495       9494       3164        3164         95 d           1          0
    359798     359797     119932      119932         98 d           1          0

---------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |     4 |   120 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T     |     4 |   120 |
|*  2 |   INDEX SKIP SCAN                   | T_IDX |     4 |       |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T"."PROCESS_ID"=0)
       filter(("T"."PRD_ID"=9495 OR "T"."PRD_ID"=12215 OR "T"."PRD_ID"=359798 OR
              "T"."PRD_ID"=4565189 OR "T"."PRD_ID"=5507203 OR "T"."PRD_ID"=78400086) AND
              "T"."OPR_PRD_QTY"0 AND "T"."PROCESS_ID"=0)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       5026  consistent gets
          0  physical reads
          0  redo size
       1167  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          3  rows processed

Not having a clear answer as to why Oracle considered this index operation doesn't prevent us from solving the issue. As explained above, an index skip scan operation is an indication of a missing index, so I created the following index (bearing in mind that the new index takes into account that the original text of the query can't be changed):

SQL> create index idx_t1_usr1 on t(process_id, prd_id, prd_qty) compress 3;

SQL> select /*+ opt_param ('optimizer_index_cost_adj' 20) */
  2         t.*
  3      from t
  4      where
  5          t.prd_id in (5507203,78400086,359798,4565189,9495,12215)
  6      and t.process_id = 0
  7      and t.opr_prd_qty <> 0;

    PRD_ID SEC_PRD_ID    PRD_QTY OPR_PRD_QTY     BKR_ID P IS_PROCESED PROCESS_ID
---------- ---------- ---------- ----------- ---------- - ----------- ----------
      9495       9494       3164        3164         95 d           1          0
     12215      12214       4071        4071         15 d           1          0
    359798     359797     119932      119932         98 d           1          0

----------------------------------------------------------------------------
| Id  | Operation                            | Name        | Rows  | Bytes |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |     4 |   120 |
|   1 |  INLIST ITERATOR                     |             |       |       |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T           |     4 |   120 |
|*  3 |    INDEX RANGE SCAN                  | IDX_T1_USR1 |     5 |       |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T"."OPR_PRD_QTY"<>0)
   3 - access("T"."PROCESS_ID"=0 AND ("T"."PRD_ID"=9495 OR "T"."PRD_ID"=12215 OR
              "T"."PRD_ID"=359798 OR "T"."PRD_ID"=4565189 OR "T"."PRD_ID"=5507203 OR
              "T"."PRD_ID"=78400086))

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         18  consistent gets
          7  physical reads
          0  redo size
       1167  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          3  rows processed

January 30, 2016

RAC : Uncached TEMP SPACE

Filed under: Oracle — hourim @ 8:00 am

We parallelized the creation of a very big index (171GB) using this:

SQL> create UNIQUE index PK_TABLE_XXXX ON TABLE_XXXX
          (COL1
          ,COL2
          ,COL3
          ,COL4
          ,COL5)
     LOCAL
     TABLESPACE TBS_IDX_XXXX
     parallel 8
     NOLOGGING;

SQL> alter index PK_TABLE_XXXX noparallel;

Unfortunately, we ran into a TEMP tablespace shortage after a couple of minutes of run time:

Error: ORA-12801
------------------------------
ORA-12801: error signaled in parallel query server P001, instance xxxxxx(1)
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP

SQL Plan Monitoring Details (Plan Hash Value=762135660)
=============================================================================
| Id |            Operation            |       Name       |  Rows   | Temp  |
|    |                                 |                  | (Estim) | (Max) |
=============================================================================
|  0 | CREATE INDEX STATEMENT          |                  |         |       |
|  1 |   PX COORDINATOR                |                  |         |       |
|  2 |    PX SEND QC (RANDOM)          | :TQ10000         |      3G |       |
|  3 |     PX PARTITION HASH ALL       |                  |      3G |       |
|  4 |      INDEX BUILD UNIQUE (LOCAL) | PK_TABLE_XXXX    |         |       |
|  5 |       SORT CREATE INDEX         |                  |      3G |   32G | -->
|  6 |        TABLE ACCESS FULL        | TABLE_XXXX       |      3G |       |
=============================================================================

This 32GB maximum of TEMP space looks very odd. I had already worked with this database and I was practically sure that it allowed more than this limit. So I looked at the gv$sort_segment view:

SQL> compute sum Label 'Total Temp Used' of "Space(GB)" on report
SQL> break on report
SQL> select
       *
    from
    (select
            tablespace_name
           ,inst_id
           ,round((total_blocks*8192)/(1024*1024*1024),2) "Space(GB)"
      from
          gv$sort_segment order by 1,2);

TABLESPACE_NAME                    INST_ID  Space(GB)
------------------------------- ---------- ----------
TEMP                                     1      31.25
TEMP                                     2     656.22
                                           ----------
Total Temp Used                                687.47

Notice this 31.25GB of TEMP space in instance 1. It looks closely related to the limit we've hit. And indeed, the index creation script was started from instance 1, as shown in the corresponding Real Time SQL Monitoring report:

Global Information
------------------------------
 Status              :  DONE (ERROR)
 Instance ID         :  1                    --> spot this
 Session             :  xxxxxx(908:33137)
 SQL ID              :  1h8puyf4b3bw7
 SQL Execution ID    :  16777216
 Execution Started   :  01/25/2016 18:02:40
 First Refresh Time  :  01/25/2016 18:02:40
 Last Refresh Time   :  01/25/2016 18:04:07
 Duration            :  87s
 Module/Action       :  SQL Developer/-
 Service             :  xxxxxx_INST1
 Program             :  SQL Developer

This database is a 2-node RAC (11.2.0.3.0). Its TEMP tablespace is composed of 22 temporary files, each 32GB in size, which makes a total available TEMP space of 22*32 = 704GB. Having no clue why Oracle was unable to use the remaining TEMP space from instance 2, I asked for the same script to be run from instance 2 of the same database:

Global Information
------------------------------
 Status              :  DONE
 Instance ID         :  2   --> second instance
 Session             :  xxxxxx(401:717)
 SQL ID              :  1h8puyf4b3bw7
 SQL Execution ID    :  33554432
 Execution Started   :  01/26/2016 12:02:59
 First Refresh Time  :  01/26/2016 12:03:00
 Last Refresh Time   :  01/26/2016 12:30:07
 Duration            :  1628s
 Module/Action       :  SQL Developer/-
 Service             :  xxxxxx_INST2
 Program             :  SQL Developer       

SQL Plan Monitoring Details (Plan Hash Value=762135660)
=============================================================================
| Id |            Operation            |       Name       |  Rows   | Temp  |
|    |                                 |                  | (Estim) | (Max) |
=============================================================================
|  0 | CREATE INDEX STATEMENT          |                  |         |       |
|  1 |   PX COORDINATOR                |                  |         |       |
|  2 |    PX SEND QC (RANDOM)          | :TQ10000         |      3G |       |
|  3 |     PX PARTITION HASH ALL       |                  |      3G |       |
|  4 |      INDEX BUILD UNIQUE (LOCAL) | PK_TABLE_XXXX    |         |       |
|  5 |       SORT CREATE INDEX         |                  |      3G |   99G |
|  6 |        TABLE ACCESS FULL        | TABLE_XXXX       |      3G |       |
=============================================================================

Notice how the index creation, this time, completes without error in about 27 minutes and consumes 99GB of TEMP space.
Here's the situation of the cached extents in the gv$temp_extent_pool view immediately after the successful index creation:

SQL> compute sum Label 'Total Temp Used' of extents_cached on report
SQL> break on report
SQL> select inst_id
              , file_id
              , extents_cached
              , extents_used
        from gv$temp_extent_pool
        order by 1,2;

   INST_ID    FILE_ID EXTENTS_CACHED EXTENTS_USED
---------- ---------- -------------- ------------
         1          1          31994            8
         1          2              4            3
         1          3              0            0
         1          4              0            0
         1          5              3            0
         1          6              0            0
         1          7              1            0
         1          8              0            0
         1          9              0            0
         1         10              0            0
         1         11              0            0
         1         12              0            0
         1         13              0            0
         1         14              0            0
         1         15              0            0
         1         16              0            0
         1         17              0            0
         1         18              0            0
         1         19              0            0
         1         20              0            0
         1         21              0            0
         1         22              0            0
         2          1              0            0
         2          2          31995            0
         2          3          31999            0
         2          4          31999            0
         2          5          31996            0
         2          6          31999            0
         2          7          31998            0
         2          8          31999            0
         2          9          31999            0
         2         10          31999            0
         2         11          31999            0
         2         12          31999            0
         2         13          31999            0
         2         14          31999            0
         2         15          31999            0
         2         16          31999            0
         2         17          31999            0
         2         18          31999            1
         2         19          31999            0
         2         20          31999            0
         2         21          31999            0
         2         22          31999            0
                      --------------
Total Temp                    703973
SQL> select
           inst_id,
           tablespace_name,
           total_blocks,
           used_blocks,
           free_blocks
    from gv$sort_segment;

   INST_ID TABLESPACE_NAME                 TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
---------- ------------------------------- ------------ ----------- -----------
         1 TEMP                                 4096256        1536     4094720
         2 TEMP                                86012288         128    86012160

We have a TEMP tablespace of 22 temporary files in a 2-node RAC configuration. We can point out that in instance 1 only the first temporary file has been used, while in instance 2 many extents have been uniformly allocated across all the files during the index creation.

SQL> select inst_id, trunc(bytes_cached/1024/1024/1024,2) Gbytes_cached from gv$temp_extent_pool;

   INST_ID GBYTES_CACHED
---------- -------------
         1         31.24
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         1             0
         2             0
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24
         2         31.24

44 rows selected.
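These figures are mutually consistent if we assume a 1MB temp extent, which the ORA-01652 above suggests (it fails to extend the temp segment by 128 blocks, and 128 * 8KB = 1MB). Under that assumption, a quick Python check (my own arithmetic, not an Oracle API) reproduces the numbers reported by the views:

```python
BLOCK_SIZE = 8192    # bytes, the same factor used in the gv$sort_segment query
EXTENT_BLOCKS = 128  # from ORA-01652: "unable to extend temp segment by 128"
EXTENT_MB = EXTENT_BLOCKS * BLOCK_SIZE / (1024 * 1024)  # = 1MB per extent

def extents_to_gb(extents):
    return round(extents * EXTENT_MB / 1024, 2)

# instance 1, file 1: 31994 cached extents, matching the 31.24GB bytes_cached
gb_file1 = extents_to_gb(31994)

# 22 temporary files of 32GB each: the 704GB of total TEMP space quoted earlier
total_gb = 22 * 32
```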

Clearly, for a reason I am not aware of, instance 2 was refusing to uncache its temp extents for instance 1. Thanks to Riyaj Shamsudeen, I learned that this situation could be due to bug 14383007, which requires a patch to be fixed.

January 20, 2016

Natural and Adjusted Hybrid Histogram

Filed under: Oracle,Statistics — hourim @ 7:16 pm

I was going to write a comment on this Jonathan Lewis article and finally decided to turn it into a blog post, because it was becoming a long comment. In the above-mentioned article, a reader was wondering why the bucket size of his modified data set does not obey the minimum bucket size explained by Jonathan. Coincidentally, I am writing a second article on Hybrid histograms for allthingsOracle, in which I use my own terminology to distinguish two types of Hybrid histogram: a first type, which I have named "Natural Hybrid", close to Jonathan's original data set, and a second type, which I have named "Adjusted Hybrid", in the same vein as the reader's modified data set. By "Natural Hybrid" I refer to a data set that doesn't qualify for a TOP-Frequency histogram because the threshold is greater than the naturally non-adjusted TopNRows. By "Adjusted Hybrid" I refer to a data set that initially satisfies the TOP-Frequency threshold but fails to qualify for a TOP-Frequency histogram because Oracle finds, in the middle of the process, that the adjusted TopNRows no longer exceeds the threshold.

Let's explain with an example. First, here's a model that qualifies for a "Natural Hybrid" histogram:

SQL> create table t1 (id number, n1 number);
SQL> start InsT1.sql (see downloadable script at the end)

If we gather a histogram for this data set of 37 distinct values using 20 buckets, we obtain a HYBRID histogram, because the TOP-Frequency threshold accounts for 95 rows while the TOP-20 rows account for only 80 rows, as shown below:

SQL> select round ((20-1)/20 * 100) threshold from dual;

 THRESHOLD
----------
        95
SQL> select
         sum (cnt) TopNRows
    from (select
            n1
           ,count(*) cnt
         from t1
         group by n1
          order by count(*) desc
          )
   where rownum <= 20;

  TOPNROWS
----------
        80

In order to compute the Hybrid histogram information, Oracle will in this case use the bucket-frequency method explained by Jonathan Lewis. This method honours the minimum bucket size of 5 (although not at the very end of the data set) and respects the condition that the initial number of buckets, 20, remains unchanged. This is what I prefer to label a Natural Hybrid histogram.

Let's now use the reader's model:

create table t_jar (id number, n1 number);
insert into t_jar values (1,5 );
insert into t_jar values (1,5 );
insert into t_jar values (1,7 );
insert into t_jar values (1,7 );
insert into t_jar values (1,7 );
insert into t_jar values (1,7 );
insert into t_jar values (1,10 );
insert into t_jar values (1,12 );
insert into t_jar values (1,15 );
insert into t_jar values (1,15 );
insert into t_jar values (1,15 );
insert into t_jar values (1,20 );
commit;

This new data set of 6 distinct values over 12 rows would normally qualify for a TOP-3 Frequency histogram, as it satisfies the threshold formula:

SQL> select round ((3-1)/3 * 12) threshold from dual;

 THRESHOLD
----------
         8
SQL> select
          sum (cnt) TopNRows
     from (select
             n1
            ,count(*) cnt
          from t_jar
          group by n1
          order by count(*) desc
             )
     where rownum <= 3;
  TOPNROWS
----------
         9

However, Oracle will not accept this at face value. It has to check whether the low and high values are among the TOP-3 distinct values. If one of these values (or both) is not in the TOP-3, Oracle will force it into the histogram, exclude the least repetitive value from the TOP-3, adjust the TopNRows, and check again whether these modifications have altered the data set to the point where it no longer qualifies for a TOP-Frequency histogram.

Here's a snippet of the corresponding dbms_stats trace file:

SQL> exec dbms_stats.set_global_prefs ('TRACE', to_char (1+16));

SQL> BEGIN
        dbms_stats.gather_table_stats
          (user, 't_jar', method_opt => 'for columns n1 size 3');
     END;
    /
SQL> exec dbms_stats.set_global_prefs('TRACE', null);
DBMS_STATS:  NNV  NDV  AVG  MMX  HST  EP   RP   NNNP IND  CNDV HSTN HSTR  COLNAME
DBMS_STATS:   Y    Y    Y    Y    Y    Y    Y                        Y    N1
DBMS_STATS: Approximate NDV Options
DBMS_STATS: ACL,TOPN,NIL,NIL,RWID,U,U3U
DBMS_STATS: start processing top n values for column N1
DBMS_STATS: topn sql (len: 415):
DBMS_STATS: +++ select /*+
                       no_parallel(t) no_parallel_index(t)
			   dbms_stats cursor_sharing_exact use_weak_name_resl
                       dynamic_sampling(0) no_monitoring
			 xmlindex_sel_idx_tbl no_substrb_pad
			  */
                   substrb(dump(N1,16,0,64),1,240) val,
                 rowidtochar(rowid) rwid
		  from "XXXX&".T_JAR& t
		  where rowid in(chartorowid('AAAJ4MAAEAACN4EAAA')
		              ,chartorowid('AAAJ4MAAEAACN4EAAC')
		    	       ,chartorowid('AAAJ4MAAEAACN4EAAI'))
		order by N1;
DBMS_STATS: remove last bucket: Typ=2 Len=2: c1,10 add: Typ=2 Len=2: c1,15
DBMS_STATS: removal_count: 1 total_nonnull_rows: 12 mnb:  3
DBMS_STATS: adjusted coverage: .667

Here’s the data set

SQL> select *
        from
       (select n1, count(1) cnt
         from t_jar
        group by n1
         order by n1);

        N1        CNT
---------- ----------
         5          2 -> low value
         7          4
        10          1
        12          1
        15          3
        20          1 -> high value
6 rows selected.

And here are the TOP-3 rows of the same data set:

SQL> select
          *
     from (select
             n1
            ,count(*) cnt
          from t_jar
          group by n1
          order by count(*) desc)
     where rownum <= 3;

        N1        CNT
---------- ----------
         7          4
        15          3
         5          2
3 rows selected.

Since the high value is not in the TOP-3 it will be forced into the histogram at the cost of excluding the least repetitive TOP-3 value, which is 5 in this case (frequency = 2). But before doing so, Oracle has to check whether, after forcing in this high value, the data set still qualifies for a TOP-Frequency histogram, using the AdjustedTopNRows:

AdjustedTopNRows = TopNRows - 2 + 1 = 9 - 2 + 1 = 8

The AdjustedTopNRows is no longer greater than the threshold of 8, which means that Oracle will finally stop collecting a TOP-Frequency histogram and transform what it has already gathered into a HYBRID histogram, as shown in the same trace file (Trying to convert frequency histogram to hybrid):

DBMS_STATS: adjusted coverage: .667
DBMS_STATS: hist_type in exec_get_topn: 2048 ndv:6 mnb:3
DBMS_STATS: Evaluating frequency histogram for col: N1
DBMS_STATS:  number of values = 4, max # of buckects = 3, pct = 100, ssize = 12
DBMS_STATS:  Trying to convert frequency histogram to hybrid
DBMS_STATS:  > cdn 10, popFreq 4, popCnt 1, bktSize 4, bktSzFrc 0
DBMS_STATS:  Evaluating hybrid histogram:  cht.count 3, mnb 3, ssize 10, min_ssize 12, appr_ndv  TRUE, ndv 4, selNdv 1,
selFreq 4,
pct 100, avg_bktsize 3, csr.hreq TRUE, normalize TRUE
DBMS_STATS:   Histogram gathering flags: 7
DBMS_STATS:  Accepting histogram
DBMS_STATS: done_hist in process_topn: TRUE  csr.ccnt: 1
DBMS_STATS: Mark column N1 as top N computed
DBMS_STATS: Skip topn computed column N1 numhist: 0

This is what I have labelled an ‘’Adjusted Hybrid’’ histogram: it is collected using a method that does not seem to involve the bucket size and amounts to a TOP-Frequency to HYBRID conversion.

SELECT
        (row_number() over(order by ept_nbr)-1) NumBucket
        ,ept_nbr
        ,ept_act_val
        ,rpt_cnt
        ,ept_nbr - (lag(ept_nbr,1,0) over(order by ept_nbr)) "new bucket size"
        ,bucket_size "original bucket_size"
    FROM
        (SELECT
             ah.endpoint_number            ept_nbr
            ,ah.endpoint_actual_value      ept_act_val
            ,lag(ah.endpoint_number,1,0) over(order by ah.endpoint_number) ept_lag
            ,ah.endpoint_repeat_count rpt_cnt
            ,at.sample_size/at.num_buckets bucket_size
         FROM
            user_tab_histograms      ah
           ,user_tab_col_statistics  at
         WHERE ah.table_name  = at.table_name
         AND ah.column_name = at.column_name
         AND ah.table_name  = 'T_JAR'
         AND ah.column_name = 'N1'
       ) ORDER BY ept_nbr;

NUMBUCKET    EPT_NBR EPT_ACT_VA    RPT_CNT new bucket size original bucket_size
---------- ---------- ---------- ---------- --------------- --------------------
         0          2 5                   2               2                    4
         1          6 7                   4               4                    4
         2         10 20                  1               4                    4

In passing, the adjusted coverage mentioned in the above trace file is nothing other than

AdjustedTopNRows/num_rows = 8/12 = 0.667
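Putting the adjustment and the coverage together, a quick Python check (plain arithmetic mirroring the article's own figures, nothing taken from Oracle internals) gives:

```python
topn_rows = 9        # rows covered by the original TOP-3 values (7, 15, 5)
evicted_count = 2    # count of the evicted TOP-3 value, per the trace's removal
num_rows = 12
threshold = 8

# evict the least repetitive TOP-3 value, force the high value in with frequency 1
adjusted_topn_rows = topn_rows - evicted_count + 1     # 8

# adjusted coverage as reported in the dbms_stats trace file
adjusted_coverage = adjusted_topn_rows / num_rows      # 0.666...

print(adjusted_topn_rows, round(adjusted_coverage, 3))  # 8 0.667
print(adjusted_topn_rows > threshold)                   # False -> falls back to HYBRID
```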

January 7, 2016

12c TOP-Frequency and the ’’half least popular value’’

Filed under: Oracle — hourim @ 1:27 pm

Asking Oracle 12c to collect a histogram for a data set with a number of distinct values (NDV) greater than the supplied number of buckets (N) will probably result in a Hybrid histogram. There is, however, a chance that a TOP-Frequency histogram will be preferred to a Hybrid one. To qualify for this alternative the data set must fulfil a pre-requisite: its TOP N distinct values must occupy a number of rows exceeding a certain threshold. If this pre-requisite is satisfied, a kind of frequency histogram is collected for the TOP N values; hence the TOP-Frequency label. The remaining insignificant values are neglected and not captured into the histogram tables. However, if the extreme values of the data set, that is the low and high values, are among the insignificant values, they will not be neglected. Up to two of the TOP N distinct values will then serve as victims: excluded from the histogram information and replaced in the TOP N captured values by the low and/or high value. These victim values are the ones with the lowest frequency of appearance in the data set. Whether the low (or high) value is naturally present in, or forced into, the TOP N captured values determines the threshold value and the cardinality estimate of the non-popular non-captured values, and that of the low (or high) value as well.

Let’s put all what precedes in action.

create table TopFreq3 as
select
    rownum n1
   ,case when mod(rownum, 100000)  = 0 then  90
         when mod(rownum, 10000)   = 0 then 180
         when mod(rownum, 1000)    = 0 then  84
         when mod(rownum, 100)     = 0 then 125
         when mod(rownum, 50)      = 2 then   7
         when mod(rownum-1, 80)    = 2 then  22
         when mod(rownum, 10)      = 0 then  19
         when mod(rownum-1, 10)    = 5 then  15
         when mod(rownum-1, 5)     = 1 then  11
         when trunc((rownum -1/3)) < 5  then 25
         when trunc((rownum -1/5)) < 20 then 33
    else 42
    end n2
from dual
connect by level <= 2e2;

SELECT n2, count(*)
FROM TopFreq3
GROUP BY n2
ORDER BY n2;

        N2   COUNT(*)
---------- ----------
         7          4 --> low value
        11         36
        15         20
        19         18
        22          3
        25          3
        33          8
        42        106
       125          2 --> high value

9 rows selected.
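As a side check, the case expression above can be replayed outside the database. This Python transcription of the same logic (with 1/3 and 1/5 evaluated as decimals, as Oracle does) reproduces the counts shown above:

```python
import math
from collections import Counter

def n2(rownum):
    # same branch order as the SQL case expression
    if rownum % 100000 == 0: return 90
    if rownum % 10000 == 0:  return 180
    if rownum % 1000 == 0:   return 84
    if rownum % 100 == 0:    return 125
    if rownum % 50 == 2:     return 7
    if (rownum - 1) % 80 == 2: return 22
    if rownum % 10 == 0:     return 19
    if (rownum - 1) % 10 == 5: return 15
    if (rownum - 1) % 5 == 1:  return 11
    if math.trunc(rownum - 1/3) < 5:  return 25
    if math.trunc(rownum - 1/5) < 20: return 33
    return 42

counts = Counter(n2(r) for r in range(1, 201))
print(sorted(counts.items()))
# [(7, 4), (11, 36), (15, 20), (19, 18), (22, 3), (25, 3), (33, 8), (42, 106), (125, 2)]
```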

As shown above the engineered data set has 9 distinct values (over 200 rows) of which the TOP 8 seem to qualify for a TOP-Frequency histogram:

undefine nb
undefine num_rows

SELECT
       &&nb numberOfBuckets
      ,TopNRows
      ,round((1-(1/&&nb))* &&num_rows) Threshold
      ,case
        when (TopNRows - round((1-(1/&&nb))* &&num_rows)) > 0 then 'Qualify For TOP Frequency'
      else
         'Do not qualify For TOP Frequency'
      end TOPFreqDecision
   FROM
   (select
         sum (cnt) TopNRows
     from (select
             n2
           , count(*) cnt
         from TopFreq3
         group by n2
         order by count(*) desc
         )
      where rownum <= &&nb
    );
Enter value for nb: 8
Enter value for num_rows: 200

NUMBEROFBUCKETS   TOPNROWS  THRESHOLD TOPFREQDECISION
--------------- ---------- ---------- --------------------------------
              8        198        175 Qualify For TOP Frequency
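The same qualification check fits in a couple of lines of Python (again my own transcription of the threshold formula):

```python
from collections import Counter

# n2 frequencies in TopFreq3: 9 distinct values over 200 rows
freq = Counter({7: 4, 11: 36, 15: 20, 19: 18, 22: 3, 25: 3, 33: 8, 42: 106, 125: 2})

num_rows = sum(freq.values())   # 200
nb = 8                          # requested number of buckets

threshold = round((1 - 1/nb) * num_rows)             # 175
topn_rows = sum(c for _, c in freq.most_common(nb))  # 198

print(topn_rows > threshold)   # True -> qualifies for TOP-Frequency
```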

Since the TOPNROWS is greater than the THRESHOLD, a TOP-FREQUENCY histogram will be preferred to a HYBRID one, as shown in the following:

begin
  dbms_stats.gather_table_stats
       (user,'TopFreq3'
       ,method_opt =>'for columns n2 size 8');
end;
/

select
    column_name
   ,num_distinct
   ,num_buckets
   ,sample_size
   ,histogram
from
   user_tab_col_statistics
   where table_name = 'TOPFREQ3'
   and column_name  = 'N2';

COLUMN_NAME  NUM_DISTINCT NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
------------ ------------ ----------- ----------- ---------------
N2                      9           8         200 TOP-FREQUENCY

Indeed, as expected, Oracle has gathered a TOP-FREQUENCY histogram for the TOP 8 distinct values, as shown below:

select
       endpoint_actual_value value,
       endpoint_number,
       endpoint_number - (lag(endpoint_number,1,0) 
                         over(order by endpoint_number)) value_count
    from	
       user_tab_histograms
    where
       table_name = 'TOPFREQ3'
    and column_name = 'N2'
    order by
       endpoint_number
    ;

VALUE      ENDPOINT_NUMBER VALUE_COUNT
---------- --------------- -----------
7                        4           4 
11                      40          36
15                      60          20
19                      78          18
25                      81           3
33                      89           8
42                     195         106
125                    196           1 --> high value with a forced frequency 1

8 rows selected.

Notice, by the way, how the high value 125, despite being insignificant (count = 2), has been captured into the histogram with a bizarrely forced frequency of 1. For the sake of clarity let’s reprint below how the n2 column is spread in the TopFreq3 table:

        N2   COUNT(*)
---------- ----------
         7          4 --> low value
        11         36
        15         20
        19         18
        22          3 --> first least popular value
        25          3 
        33          8
        42        106
       125          2 --> high value
9 rows selected.

Again, notice how the value 22, the least popular value, has been removed from the TOP 8 to make room for the insignificant high value 125 with a forced frequency of 1. This is how TOP-Frequency seems to work: if the low and/or high value of the data set is naturally present in the TOP N values, there is no exclusion-substitution. If, however, one of these values (or both) is outside the TOP N values, it must be included in the histogram information in place of the least popular value.

Let’s now check what cardinality estimate will be calculated for the non-captured values (including the excluded least popular value):

explain plan for select count(1) from TopFreq3 where n2= 22;

select * from table(dbms_xplan.display);

-----------------------------------------------
| Id  | Operation          | Name     | Rows  |
-----------------------------------------------
|   0 | SELECT STATEMENT   |          |     1 |
|   1 |  SORT AGGREGATE    |          |     1 |
|*  2 |   TABLE ACCESS FULL| TOPFREQ3 |     4 |
-----------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N2"=22)

How did Oracle work out this cardinality of 4? In this case Oracle is simply using the following formula:

E-Rows = num_rows * NewDensity

Where NewDensity is:

NewDensity = (sample_size-TopNRows)/(num_distinct-num_buckets)/sample_size
NewDensity = (200-198)/(9-8)/200 = .01

Finally applying the above mentioned cardinality estimation we get this:

E-Rows = num_rows * NewDensity
E-Rows = 200 * 0.01 = 2

But this is not equal to the correct cardinality estimation of 4. In fact, when one of the extreme values is forced into the histogram information, the TopNRows is slightly adjusted to take into account the exclusion-inclusion swap between the extreme value and the least popular value:

AdjustedTopNRows = TopNRows - (count of least popular value) + 1 (forced frequency of
                                                                  the extreme value)

AdjustedTopNRows = 198 - 3 + 1 = 196

Apply this to the above cardinality estimation formula and you get the correct estimate, as shown below:

NewDensity = (200- 196)/(9-8)/200   = .02
E-Rows     = num_rows * NewDensity  = 200 * 0.02 = 4
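The whole calculation, with and without the adjustment, can be sketched as follows (plain Python arithmetic mirroring the formulas above):

```python
num_rows     = 200
sample_size  = 200
num_distinct = 9
num_buckets  = 8
topn_rows    = 198   # rows covered by the TOP 8 values

# naive formula: ignores the forced high value -> wrong estimate of 2
new_density = (sample_size - topn_rows) / (num_distinct - num_buckets) / sample_size
naive_erows = round(num_rows * new_density)

# adjusted: evict the least popular value (count 3), force the high value with count 1
adjusted_topn_rows = topn_rows - 3 + 1   # 196
new_density = (sample_size - adjusted_topn_rows) / (num_distinct - num_buckets) / sample_size
adjusted_erows = round(num_rows * new_density)

print(naive_erows, adjusted_erows)   # 2 4 -> 4 matches the execution plan
```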

And this is clearly backed by the corresponding 10053 trace file (spot the NewDensity and the BktCnt):

SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for TOPFREQ3[TOPFREQ3] 
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Column (#2): 
    NewDensity:0.020000, OldDensity:0.002500 BktCnt:196.000000, PopBktCnt:195.000000, PopValCnt:7, NDV:9
  Column (#2): N2(NUMBER)
    AvgLen: 4 NDV: 9 Nulls: 0 Density: 0.020000 Min: 7.000000 Max: 125.000000
    Histogram: Top-Freq  #Bkts: 196  UncompBkts: 196  EndPtVals: 8  ActualVal: yes
  Table: TOPFREQ3  Alias: TOPFREQ3
    Card: Original: 200.000000  Rounded: 4  Computed: 4.000000  Non Adjusted: 4.000000

Bottom line: this article aims to restrain the enthusiasm of those using the ‘‘half least popular value’’ rule to get the selectivity of a non-popular frequency histogram value before they extend it to non-popular TOP-Frequency values. It seems that this extension is not correct. All in all, a non-popular TOP-FREQUENCY value seems to be nothing other than a non-popular HYBRID value.

December 31, 2015

2015 in review

Filed under: Oracle — hourim @ 6:00 pm

This is the 2015 annual report for my blog. I hope I will continue sharing and learning interesting things with you.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 32,000 times in 2015. If it were a concert at Sydney Opera House, it would take about 12 sold-out performances for that many people to see it.

Click here to see the complete report.

November 28, 2015

All on Adaptive and Extended Cursor Sharing presentation

Filed under: Oracle — hourim @ 8:26 am

Here’s a presentation I did 3 days ago (26/11/2015) for the French Oracle Meetup (Lille). It details how Oracle (the Adaptive Cursor Sharing part) monitors a bind sensitive cursor before making it bind aware, and how Oracle (the Extended Cursor Sharing part) monitors a bind aware cursor before optimizing a new optimal execution plan or sharing an existing one.

October 1, 2015

Oracle Optimizer and SPM plan interaction

Filed under: Oracle — hourim @ 5:25 pm

Inspired by Dominic Brooks’ post on SQL Plan Management choices, I decided to picture the Oracle CBO behaviour in the presence of enabled and accepted SPM plan baselines:

CBO-SPM interaction diagram

The right-hand part of the picture, when triggered, shows the parsing penalty you have to pay before running your SQL query. In particular, when there are multiple accepted and enabled SPM plans, the CBO has to try reproducing and costing all of them before making its final decision. The picture also shows that, under all circumstances, the CBO first compiles its execution plan as if it were not constrained by any SPM plan. This clearly demonstrates that if your query suffers from a long hard parse (when the plan generation itself takes a lot of time) then SPM will not help you. This is where the mantra “When you can hint it then Baseline it” ceases to be accurate.

September 18, 2015

Basic versus OLTP compression

Filed under: Oracle — hourim @ 11:53 am

If you are going to archive a table holding a very large amount of data from a running system into a historical table, and you expect this archived table to be read-only, then you had better use the free BASIC compression mode instead of the paid OLTP mode. This article aims to show the reason for this preference based on a real-life example.

In order to take advantage of the free BASIC compression mode you first need to insert data using a direct path load (this is why this compression mode used to be known as compression for direct path load, or bulk load, operations) and ensure that there is no trigger or integrity constraint on the target table that would make Oracle silently ignore your requested direct path load mode. In passing, in contrast to preceding releases, a 12.1.0.2 Oracle database will inform you when a direct path load is ignored, via the Note at the bottom of the corresponding insert execution plan:

Note
-----
   - PDML disabled because triggers are defined
   - Direct Load disabled because triggers are defined

Here’s an example of BASIC compression I’ve executed on a 12.1.0.2 database:

SELECT  
    t.owner,
    t.table_name,
    t.compress_for,
    s.bytes/1024/1024/1024 GB
FROM 
    dba_tables t ,
    dba_segments s
WHERE 
    s.segment_name = t.table_name
AND s.owner        = 'XXX'
AND t.table_name   = 'T1'
ORDER BY 4;
   
OWNER      TABLE_NAME   COMPRESS_FOR    GB
---------- ------------ ------------- ----------
XXX        T1                         263.114136 

263 GB worth of data of which we need to keep only 2 years and archive the rest.

The first step in this process was to create an empty table cloned from the production table and define it to accept the BASIC mode compression:

-- create a table compressed with BASIC mode
CREATE TABLE T1_HIST
      COMPRESS BASIC
      TABLESPACE HIST_TABSPACE 
         AS
      SELECT * FROM T1
      WHERE 1 = 2;   

Since I was going to send a huge amount of data into this archive table, I decided to use a parallel insert, which needs to be preceded by enabling parallel DML, either by altering the session or by using the appropriate hint (in 12c):

-- enable parallel dml 
SQL> alter session enable parallel dml;        

The way is now paved to start sending historical data to its new destination:

-- direct path load data older than 2 years into the archived table
INSERT /*+ append parallel(bsic,4) */ 
  INTO T1_HIST bsic
  SELECT
          /*+ parallel(oltp,4) */
              *
    FROM  T1 oltp
       WHERE HIST_DAT > add_months(sysdate,-24);

2,234,898,367 rows created.

More than 2 billion rows direct-path loaded.

And the agreeable surprise is:

SQL> SELECT
    t.owner,
    t.table_name,
    t.compress_for,
    s.bytes/1024/1024/1024 GB
FROM
    dba_tables t ,
    dba_segments s
WHERE
    t.compress_for ='BASIC'
AND
    s.segment_name   = t.table_name
AND s.owner       = 'XXX'
ORDER BY 4;

OWNER      TABLE_NAME   COMPRESS_FOR   GB
---------- ------------ ------------- ----------
XXX         T1_HIST     BASIC         53.2504272 

We went from a source table with 263GB worth of data to a cloned table compressed in BASIC mode holding only 53GB of historical data.

If you are wondering how many rows I have not sent into the archived table, here they are:

SQL> select /*+ parallel(8) */ count(1) 
     from T1
     where HIST_DAT <= add_months(sysdate,-24);
   
7,571,098 

This means that using the free BASIC compression mode I have archived almost all rows from the production table and succeeded in packing the initial 263GB into only 53GB. That’s an excellent compression ratio of about 5 (we have divided the initial size by 5), even though Oracle says that the compression ratio depends on the nature of your data and usually ranges between a factor of 2x and 4x.
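For the record, the compression ratio quoted above is simple arithmetic on the two segment sizes:

```python
source_gb     = 263.114136   # size of T1 before archiving
compressed_gb = 53.2504272   # size of T1_HIST with BASIC compression

ratio = source_gb / compressed_gb
print(round(ratio, 1))   # 4.9, i.e. roughly a factor of 5
```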

Had you used the paid OLTP compression mode you would have got an archived table approximately 10% larger (~60GB). This is because Oracle considers a table compressed in BASIC mode to be read-only and not subject to any update, and therefore sets PCT_FREE to 0 behind the scenes:

SQL> select table_name, pct_free
    from dba_tables
    where table_name = 'T1_HIST';

TABLE_NAME    PCT_FREE
----------- ----------
T1_HIST          0 

As you can see, if you intend to archive a huge amount of data into a read-only table, want to reclaim disk space with this operation, and don’t want to pay for OLTP compression, then you can opt for the free BASIC compression mode.

A few interesting things came up along with this archiving process, particularly the new 12c mechanism (HYBRID TSM/HWMB) Oracle uses to keep down the number of new extents and to avoid the HV enqueue wait event during parallel direct path loads with a high DOP across several instances:

SQL Plan Monitoring Details (Plan Hash Value=3423860299)
====================================================================================
| Id |              Operation               |   Name   |  Rows   |Execs |   Rows   |
|    |                                      |          | (Estim) |      | (Actual) |
====================================================================================
|  0 | INSERT STATEMENT                     |          |         |    5 |        8 |
|  1 |   PX COORDINATOR                     |          |         |    5 |        8 |
|  2 |    PX SEND QC (RANDOM)               | :TQ10000 |    819M |    4 |        8 |
|  3 |     LOAD AS SELECT (HYBRID TSM/HWMB) |          |         |    4 |        8 |
|  4 |      OPTIMIZER STATISTICS GATHERING  |          |    819M |    4 |       2G |
|  5 |       PX BLOCK ITERATOR              |          |    819M |    4 |       2G |
|  6 |        TABLE ACCESS FULL             | T1       |    819M |  442 |       2G |
====================================================================================

Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access(:Z>=:Z AND :Z<=:Z) filter("HIST_DAT">ADD_MONTHS(SYSDATE@!,-24))

Note
-----
   - Degree of Parallelism is 4 because of table property

But this will be detailed in a separate blog article.

September 5, 2015

Bind aware secret sauce (again)

Filed under: Oracle — hourim @ 8:21 am

I am sure many of you have already become bored with my posts on adaptive cursor sharing. I hope this article will be the last one :-). In part III of the installment I was unable to figure out the secret sauce Oracle uses to mark a cursor bind aware when the counts of the 3 buckets of a bind sensitive cursor are all greater than 0. For those who want to know what bucket and count represent, start by reading part I and part II.

Thanks to a comment dropped by an anonymous reader, coupled with my understanding of the adaptive cursor sharing mechanism, I think I have succeeded in figuring out that secret sauce. My goal here is to publish it and let readers break and invalidate it.

To make things simple I created the following function:

------------------------------------------------------------------------------
-- File name:   fv_will_cs_be_bind_aware
-- Author   :   Mohamed Houri (Mohamed.Houri@gmail.com)
-- Date     :   29/08/2015
-- Purpose  :   When supplied with 3 parameters 
--                   pin_cnt_bucket_0 : count of bucket_id n°0
--                   pin_cnt_bucket_1 : count of bucket_id n°1
--                   pin_cnt_bucket_2 : count of bucket_id n°2
-- 
--              this function will return a status:
--
--              'Y' if the next execution at any bucket_id will mark the cursor bind aware          
--               
--              'N' if the next execution at any bucket_id will NOT mark the cursor bind aware
--
--------------------------------------------------------------------------------
create or replace function fv_will_cs_be_bind_aware
  (pin_cnt_bucket_0 in number
  ,pin_cnt_bucket_1 in number
  ,pin_cnt_bucket_2 in number)
return 
   varchar2
is
  lv_will_be_bind_aware 
                 varchar2(1) := 'N';
  ln_least_0_2   number      := least(pin_cnt_bucket_0,pin_cnt_bucket_2);
  ln_great_0_2   number      := greatest(pin_cnt_bucket_0,pin_cnt_bucket_2);
 
begin
 if pin_cnt_bucket_0 + pin_cnt_bucket_2 >= pin_cnt_bucket_1
  and ln_least_0_2 >= ceil ((ln_great_0_2-pin_cnt_bucket_1)/3)
  then
    return 'Y';
  else
    return 'N';
  end if;
end fv_will_cs_be_bind_aware;
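For readers who want to play with the formula outside the database, here is a line-for-line Python port of the function above (same logic, nothing added):

```python
import math

def will_cs_be_bind_aware(cnt_bucket_0, cnt_bucket_1, cnt_bucket_2):
    """Python port of fv_will_cs_be_bind_aware: returns 'Y' if the next
    execution at any bucket_id will mark the cursor bind aware, 'N' otherwise."""
    least_0_2 = min(cnt_bucket_0, cnt_bucket_2)
    great_0_2 = max(cnt_bucket_0, cnt_bucket_2)
    if (cnt_bucket_0 + cnt_bucket_2 >= cnt_bucket_1
            and least_0_2 >= math.ceil((great_0_2 - cnt_bucket_1) / 3)):
        return 'Y'
    return 'N'

print(will_cs_be_bind_aware(10, 1, 3))   # Y
print(will_cs_be_bind_aware(10, 1, 2))   # N
```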

If you have a cursor with a combination of 3 buckets having a count > 0, and you want to know whether the next execution will mark the cursor bind aware or not, you just have to do this:

SQL> select fv_will_cs_be_bind_aware(10,1,3) acs from dual;

ACS
---
Y

Or this

SQL> select fv_will_cs_be_bind_aware(10,1,2) acs from dual;

ACS
---
N

The first call indicates that the next cursor execution will compile a new optimal plan, while the second call indicates that the existing child cursor will still be shared.

It’s now time to practice:

SQL> alter session set cursor_sharing=FORCE;

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          1
           0          1          0
           0          2          0

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100
SQL> /

-- repeat this 7 times

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10 --> 10 executions at bucket_id 0
           0          1          0
           0          2          0

Now change the bind variable value so that the bucket_id n°1 will be incremented

SQL> select count(1) from t_acs where n2 = 10000;

  COUNT(1)
----------
    100000

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1 --> 1 execution at bucket_id 1
           0          2          0

Now change again the bind variable value so that the bucket_id n°2 will be incremented

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5 from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1
           0          2          2 --> 2 executions at bucket_id 2
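In passing, the bucket_id an execution falls into depends on the number of rows it processes. The boundaries below are the commonly reported ones, not something derived in this post, so treat them as an assumption; they are at least consistent with the n2 = 100 / 10000 / 1000000 executions shown above:

```python
def bucket_id(rows_processed):
    # commonly reported (undocumented) ACS bucket boundaries -- an assumption
    if rows_processed < 1_000:
        return 0            # small result sets
    if rows_processed < 1_000_000:
        return 1            # medium result sets
    return 2                # large result sets

print(bucket_id(100), bucket_id(100_000), bucket_id(1_099_049))   # 0 1 2
```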

If, at this stage, you want to know whether the next execution at bucket id n°2 will mark the cursor bind aware or not then make a call to the function:

SQL> select fv_will_cs_be_bind_aware(10,1,2) acs from dual;

ACS
----
N

No, it will not, and here’s the proof:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         10
           0          1          1
           0          2          3

And what about the next execution, say at bucket_id n° 0?

SQL> select fv_will_cs_be_bind_aware(10,1,3) acs from dual;

ACS
----
Y

The function indicates that the next execution will compile a new child cursor; let’s check:

SQL> select count(1) from t_acs where n2 = 100;

  COUNT(1)
----------
       100

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          1
           1          1          0
           1          2          0
           0          0         10
           0          1          1
           0          2          3

Yes it did.

Can we assume from a single test that this function is reliable? No.

You want another example? Here it is:

SQL> -- run the following sql 19 times at bucket_id n°0
SQL> select count(1) from t_acs where n2 = 100;

SQL> -- run the same sql 6 times at bucket_id n°1
SQL> select count(1) from t_acs where n2 = 10000;

SQL> -- run the same sql 2 times at bucket_id n°2
SQL> select count(1) from t_acs where n2 = 1000000;

And here’s the resulting cursor sharing picture:

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5 from
  6     v$sql_cs_histogram
  7 where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          2

Will the next execution compile a new execution plan?

SQL> select fv_will_cs_be_bind_aware(19,6,2) acs from dual;

ACS
---
N

No, it will not, as proven below:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          3

And what about the next execution at the same bucket_id n°2? And the next next execution?

SQL> select fv_will_cs_be_bind_aware(19,6,3) acs from dual;

ACS
---
N

SQL> select fv_will_cs_be_bind_aware(19,6,4) acs from dual;

ACS
---
N

And the next execution?

SQL> select fv_will_cs_be_bind_aware(19,6,5) acs from dual;

ACS
----
Y

At the (bucket_id, count) situation shown below the function is indicating that the next execution will mark the cursor bind aware and compile a new execution plan:

SQL> select
  2     child_number
  3    ,bucket_id
  4    ,count
  5  from
  6    v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0         19
           0          1          6
           0          2          5

Let’s verify:

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
  2      child_number
  3     ,bucket_id
  4     ,count
  5  from
  6     v$sql_cs_histogram
  7  where  sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          0
           1          1          0
           1          2          1
           0          0         19
           0          1          6
           0          2          5

Yes it did, as you can see from the appearance of the new child cursor n°1.

Want another example? Here it is:

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          3
           0          1          1
           0          2         11

SQL> select fv_will_cs_be_bind_aware(3,1,11) acs from dual;

ACS
---
N

SQL> select count(1) from t_acs where n2 = 10000;

  COUNT(1)
----------
    100000

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';

CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           0          0          3
           0          1          2
           0          2         11

SQL> select fv_will_cs_be_bind_aware(3,2,11) acs from dual;

ACS
---
Y

SQL> select count(1) from t_acs where n2 = 1000000;

  COUNT(1)
----------
   1099049

SQL> select
        child_number
       ,bucket_id
       ,count
    from
        v$sql_cs_histogram
    where sql_id = '7ck8k47bnqpnv';


CHILD_NUMBER  BUCKET_ID      COUNT
------------ ---------- ----------
           1          0          0
           1          1          0
           1          2          1
           0          0          3
           0          1          2
           0          2         11

I am sure that someone will come up with a simple situation where the function returns a wrong result. Bear in mind that this is exactly what I want, and you’re welcome to try.

Footnote

If you want to break this function, here’s the model you can use (you need a histogram on the n2 column):

create table t_acs(n1  number, n2 number);

BEGIN
     for j in 1..1200150 loop
      if j = 1 then
       insert into t_acs values (j, 1);
      elsif j > 1 and j <= 101 then
       insert into t_acs values (j, 100);
      elsif j > 101 and j <= 1101 then
       insert into t_acs values (j, 1000);
      elsif j > 10001 and j <= 110001 then
       insert into t_acs values (j, 10000);
     else
      insert into t_acs values(j, 1000000);
     end if;
    end loop;
   commit;
END;
/ 
create index t_acs_i1 on t_acs(n2);
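
Since the model requires a histogram on the n2 column, you can gather one explicitly with dbms_stats; the size 254 setting below is just one possible choice, not the only valid one:

exec dbms_stats.gather_table_stats(user, 't_acs', method_opt => 'for columns n2 size 254', cascade => true);

With the skew built into the insert loop above, a frequency histogram on n2 is what makes the optimizer's cardinality estimates differ enough between bind values for adaptive cursor sharing to kick in.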

Update 08/09/2015: added an extra if condition to the function.
