Mohamed Houri’s Oracle Notes

June 27, 2021

Why my execution plan is not in AWR?

Filed under: ASH,AWR — hourim @ 12:56 pm

I realized the other week, while following the Diagnosing Oracle RAC Performance using AWR reports webinar, that many people out there are asking why, although their sql_id is present in dba_hist_active_sess_history (AWR), they are unable to get its corresponding execution plan from AWR. This is the third time, in just a couple of months, that I have seen this question asked. It’s now time to fill this gap.

In order to answer this question, one needs to know that dba_hist_active_sess_history gets its data from gv$active_session_history (ASH), while dba_hist_sqlstat and its companion tables get their data from a bunch of other dynamic performance views (v$sql, v$version, etc.).

In this context, rows flagged to be copied into AWR are dumped from gv$active_session_history into dba_hist_active_sess_history, not exactly on the fly, but with a relatively small delay which I am not able to quantify. The remaining dynamic performance views (v$sql, v$session, etc.) are, however, dumped into the AWR historical tables (dba_hist_sqlstat, etc.) only at AWR snapshot time, typically every hour.
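As a side note, that "typically every hour" is not a fixed constant: the snapshot interval can be checked, and changed, along these lines (a minimal sketch, not part of the original demo):

select snap_interval, retention
from   dba_hist_wr_control;

-- to change the snapshot interval to, say, 30 minutes:
-- exec dbms_workload_repository.modify_snapshot_settings(interval => 30)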

Moreover, Oracle does not flush data from v$active_session_history the same way it does from v$sql, v$session and the other dynamic performance views. The rate at which data is flushed from ASH depends strongly on the size of the in-memory buffer defined by the _ash_size parameter:

SQL> @pd _ash_size
Show all parameters and session values from x$ksppi/x$ksppcv...

NAME       VALUE     DESCRIPTION
---------- --------- ----------------------------------------------------------------
_ash_size  1048618   To set the size of the in-memory Active Session History buffers

It also depends on how much ASH is stressed, i.e. on how often emergency flushes have to be done:

SQL> select
        inst_id
       ,total_size/power(1024,2) MB
       ,awr_flush_emergency_count
     from gv$ash_info;

   INST_ID         MB AWR_FLUSH_EMERGENCY_COUNT
---------- ---------- -------------------------
         1          4                         0

But, typically, on real-life running systems, v$active_session_history almost always retains data for more than a couple of hours (sometimes a whole day), allowing Oracle to dump all flagged ASH records (is_awr_sample = 'Y') into dba_hist_active_sess_history without any loss.
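A quick way of checking how far back the in-memory ASH buffer of your own system currently goes is a query along these lines (a sketch, not shown in the original post):

select inst_id
      ,min(sample_time) oldest_sample
      ,max(sample_time) latest_sample
      ,count(*)         samples
from   gv$active_session_history
group  by inst_id
order  by inst_id;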

On the other hand, flushing the other dynamic performance views follows a completely different algorithm, strongly related to the activity of the application. Consequently, it is very plausible that, while a sql_id is dumped into dba_hist_active_sess_history with a relatively small delay, the same sql_id has already been flushed out of v$sql by the time the AWR snapshot is taken, one hour later. This is why one can fail to get the AWR execution plan of a sql_id that is present in dba_hist_active_sess_history.
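In practice, when a sql_id shows up in dba_hist_active_sess_history but dbms_xplan.display_awr returns nothing, the first thing worth checking is whether the cursor is still in the library cache at all; a minimal sketch:

select sql_id
      ,child_number
      ,plan_hash_value
      ,is_shareable
      ,last_active_time
from   gv$sql
where  sql_id = '&sql_id';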

Let’s demonstrate this with a simple model:

create table t1
  as
  with generator as (
      select  --+ materialize
          rownum id
      from dual
      connect by
          level <= 10000)
  select
      rownum                  id,
      trunc(dbms_random.value(1,1000))    n1,
      lpad(rownum,10,'0') small_vc,
      rpad('x',100)       padding
  from
      generator   v1,
      generator   v2
  where
      rownum <= 1000000;

create index t1_n1 on t1(id, n1);
 
create table t2
   as
   with generator as (
       select  --+ materialize
           rownum id
       from dual
       connect by
           level <= 10000)
   select
       rownum                  id,
       trunc(dbms_random.value(10001,20001))   x1,
       lpad(rownum,10,'0') small_vc,
       rpad('x',100)       padding
   from
       generator   v1,
       generator   v2
   where
       rownum <= 1000000;
 
create index t2_i1 on t2(x1);

I have created two tables with two indexes and launched the following query

select *
   from t1 
 where id 
   in (select id from t2 where x1 = 17335)
order by id;

93 rows selected.

Elapsed: 00:03:05.22

But, before executing the above query, I changed the optimizer mode from all_rows to first_rows so that a costly INDEX FULL SCAN would be used to avoid the sort implied by the order by id clause (hence the 3-minute elapsed time).
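For reference, the change was a simple session-level setting along the following lines (a sketch; the exact command is not shown in the original post):

alter session set optimizer_mode = first_rows;
-- and, once the test is over:
-- alter session set optimizer_mode = all_rows;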

While the query was running, I executed the following queries against ASH and AWR to see how the dumping was progressing:

SELECT    
    sample_time
   ,is_awr_sample
   ,COUNT(1)
FROM
    gv$active_session_history
WHERE
    sql_id = '069t64t6ws5mx'
AND  is_awr_sample = 'Y'   
GROUP BY
    sample_time
    ,is_awr_sample
order by sample_time ;

SAMPLE_TIME                   I   COUNT(1)
----------------------------- - ----------
26-JUN-21 03.17.29.878 PM     Y          1
26-JUN-21 03.17.40.117 PM     Y          1
26-JUN-21 03.17.50.357 PM     Y          1
26-JUN-21 03.18.00.598 PM     Y          1
26-JUN-21 03.18.10.839 PM     Y          1
26-JUN-21 03.18.21.141 PM     Y          1
26-JUN-21 03.18.31.510 PM     Y          1
26-JUN-21 03.18.41.750 PM     Y          1
26-JUN-21 03.18.51.991 PM     Y          1
26-JUN-21 03.19.02.229 PM     Y          1
26-JUN-21 03.19.12.470 PM     Y          1
26-JUN-21 03.19.22.709 PM     Y          1
26-JUN-21 03.19.32.949 PM     Y          1
26-JUN-21 03.19.43.188 PM     Y          1
26-JUN-21 03.19.53.493 PM     Y          1

15 rows selected.

SELECT
    sql_id,
    sample_time
FROM
    dba_hist_active_sess_history
WHERE
    sql_id = '069t64t6ws5mx'
ORDER BY
    2;

no rows selected

SQL> select * from table(dbms_xplan.display_awr('069t64t6ws5mx'));

no rows selected

And so on, until the query finally completes:

SELECT    
    sample_time
   ,is_awr_sample
   ,COUNT(1)
FROM
    gv$active_session_history
WHERE
    sql_id = '069t64t6ws5mx'
AND  is_awr_sample = 'Y'   
GROUP BY
    sample_time
    ,is_awr_sample
order by sample_time ;


SAMPLE_TIME                 I   COUNT(1)
--------------------------- - ----------
26-JUN-21 03.17.29.878 PM   Y          1
26-JUN-21 03.17.40.117 PM   Y          1
26-JUN-21 03.17.50.357 PM   Y          1
26-JUN-21 03.18.00.598 PM   Y          1
26-JUN-21 03.18.10.839 PM   Y          1
26-JUN-21 03.18.21.141 PM   Y          1
26-JUN-21 03.18.31.510 PM   Y          1
26-JUN-21 03.18.41.750 PM   Y          1
26-JUN-21 03.18.51.991 PM   Y          1
26-JUN-21 03.19.02.229 PM   Y          1
26-JUN-21 03.19.12.470 PM   Y          1
26-JUN-21 03.19.22.709 PM   Y          1
26-JUN-21 03.19.32.949 PM   Y          1
26-JUN-21 03.19.43.188 PM   Y          1
26-JUN-21 03.19.53.493 PM   Y          1
26-JUN-21 03.20.03.733 PM   Y          1
26-JUN-21 03.20.14.102 PM   Y          1
26-JUN-21 03.20.24.342 PM   Y          1

18 rows selected.

SELECT
    sql_id,
    sample_time
FROM
    dba_hist_active_sess_history
WHERE
    sql_id = '069t64t6ws5mx'
ORDER BY
    2;

SQL_ID        SAMPLE_TIME
------------- -----------------------------
069t64t6ws5mx 26-JUN-21 03.17.29.878 PM
069t64t6ws5mx 26-JUN-21 03.17.40.117 PM
069t64t6ws5mx 26-JUN-21 03.17.50.357 PM
069t64t6ws5mx 26-JUN-21 03.18.00.598 PM
069t64t6ws5mx 26-JUN-21 03.18.10.839 PM
069t64t6ws5mx 26-JUN-21 03.18.21.141 PM
069t64t6ws5mx 26-JUN-21 03.18.31.510 PM
069t64t6ws5mx 26-JUN-21 03.18.41.750 PM
069t64t6ws5mx 26-JUN-21 03.18.51.991 PM
069t64t6ws5mx 26-JUN-21 03.19.02.229 PM
069t64t6ws5mx 26-JUN-21 03.19.12.470 PM
069t64t6ws5mx 26-JUN-21 03.19.22.709 PM
069t64t6ws5mx 26-JUN-21 03.19.32.949 PM
069t64t6ws5mx 26-JUN-21 03.19.43.188 PM
069t64t6ws5mx 26-JUN-21 03.19.53.493 PM
069t64t6ws5mx 26-JUN-21 03.20.03.733 PM
069t64t6ws5mx 26-JUN-21 03.20.14.102 PM
069t64t6ws5mx 26-JUN-21 03.20.24.342 PM

18 rows selected.

SQL> select * from table(dbms_xplan.display_awr('069t64t6ws5mx'));

no rows selected

As you can see, as long as the next AWR snapshot has not been taken, the corresponding execution plan is not yet in AWR:

select 
   snap_id,
   to_char(begin_interval_time, 'dd/mm/yyyy hh24:mi:ss') begin
  ,to_char(end_interval_time, 'dd/mm/yyyy hh24:mi:ss') end
from
   dba_hist_snapshot
order by 1 desc
fetch first 5 rows only;

  SNAP_ID BEGIN               END
---------- ------------------- -------------------
       392 26/06/2021 13:58:56 26/06/2021 15:00:07
       391 26/06/2021 13:00:33 26/06/2021 13:58:56
       390 26/06/2021 12:00:48 26/06/2021 13:00:33
       389 26/06/2021 11:01:00 26/06/2021 12:00:48
       388 26/06/2021 10:00:11 26/06/2021 11:01:00

Let’s then force a snapshot so that the execution plan gets dumped into AWR:

SQL> EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

PL/SQL procedure successfully completed.

select 
   snap_id,
   to_char(begin_interval_time, 'dd/mm/yyyy hh24:mi:ss') begin
  ,to_char(end_interval_time, 'dd/mm/yyyy hh24:mi:ss') end
from
   dba_hist_snapshot
order by 1 desc
fetch first 5 rows only;

   SNAP_ID BEGIN               END
---------- ------------------- -------------------      
       393 26/06/2021 15:00:07 26/06/2021 15:46:40
       392 26/06/2021 13:58:56 26/06/2021 15:00:07
       391 26/06/2021 13:00:33 26/06/2021 13:58:56
       390 26/06/2021 12:00:48 26/06/2021 13:00:33
       389 26/06/2021 11:01:00 26/06/2021 12:00:48

SQL> select * from table(dbms_xplan.display_awr('069t64t6ws5mx'));

SQL_ID 069t64t6ws5mx
--------------------
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |   102M(100)|          |
|   1 |  NESTED LOOPS SEMI           |       |   100 | 13100 |   102M  (1)| 01:06:26 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |  1000K|   115M| 20332   (1)| 00:00:01 |
|   3 |    INDEX FULL SCAN           | T1_N1 |  1000K|       |  2779   (1)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |    10 |   102   (0)| 00:00:01 |
|   5 |    INDEX RANGE SCAN          | T2_I1 |   100 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

If I repeat the same experiment but, this time, flush the sql_id from the shared pool just before taking a new AWR snapshot, you will see that no execution plan ends up in AWR:

SELECT *
   from t1 
 where id
   in (select id from t2 where x1 = 17335)
ORDER BY id;	

SELECT    
    sample_time
   ,is_awr_sample
   ,COUNT(1)
FROM
    gv$active_session_history
WHERE
    sql_id = '65gw9y7cw56k1'
AND  is_awr_sample = 'Y'   
GROUP BY
    sample_time
    ,is_awr_sample
order by sample_time ;

SAMPLE_TIME                    I   COUNT(1)
------------------------------ - ----------
27-JUN-21 09.01.47.733 AM      Y          1
27-JUN-21 09.01.57.973 AM      Y          1
27-JUN-21 09.02.08.277 AM      Y          1
27-JUN-21 09.02.18.518 AM      Y          1
27-JUN-21 09.02.28.757 AM      Y          1
27-JUN-21 09.02.39.026 AM      Y          1
27-JUN-21 09.02.49.301 AM      Y          1
27-JUN-21 09.02.59.609 AM      Y          1

8 rows selected.

SELECT
    sql_id,
    sample_time
FROM
    dba_hist_active_sess_history
WHERE
    sql_id = '65gw9y7cw56k1'
ORDER BY 2;

no rows selected ----> still no rows dumped into AWR

SELECT    
    sample_time
   ,is_awr_sample
   ,COUNT(1)
FROM
    gv$active_session_history
WHERE
    sql_id = '65gw9y7cw56k1'
AND  is_awr_sample = 'Y'   
GROUP BY
    sample_time
    ,is_awr_sample
order by sample_time;

SAMPLE_TIME                 I   COUNT(1)
--------------------------- - ----------
27-JUN-21 09.01.47.733 AM   Y          1
27-JUN-21 09.01.57.973 AM   Y          1
27-JUN-21 09.02.08.277 AM   Y          1
27-JUN-21 09.02.18.518 AM   Y          1
27-JUN-21 09.02.28.757 AM   Y          1
27-JUN-21 09.02.39.026 AM   Y          1
27-JUN-21 09.02.49.301 AM   Y          1
27-JUN-21 09.02.59.609 AM   Y          1
27-JUN-21 09.03.09.846 AM   Y          1
27-JUN-21 09.03.20.085 AM   Y          1
27-JUN-21 09.03.30.454 AM   Y          1
27-JUN-21 09.03.40.693 AM   Y          1
27-JUN-21 09.03.50.997 AM   Y          1
27-JUN-21 09.04.01.365 AM   Y          1
27-JUN-21 09.04.11.606 AM   Y          1

15 rows selected.

SELECT
    sql_id,
    sample_time
FROM
    dba_hist_active_sess_history
WHERE
    sql_id = '65gw9y7cw56k1'
ORDER BY 2;

no rows selected  ----> still no rows dumped into AWR

SELECT    
    sample_time
   ,is_awr_sample
   ,COUNT(1)
FROM
    gv$active_session_history
WHERE
    sql_id = '65gw9y7cw56k1'
AND  is_awr_sample = 'Y'   
GROUP BY
    sample_time
    ,is_awr_sample
order by sample_time ;

SAMPLE_TIME                    I   COUNT(1)
------------------------------ - ----------
27-JUN-21 09.01.47.733 AM      Y          1
27-JUN-21 09.01.57.973 AM      Y          1
27-JUN-21 09.02.08.277 AM      Y          1
27-JUN-21 09.02.18.518 AM      Y          1
27-JUN-21 09.02.28.757 AM      Y          1
27-JUN-21 09.02.39.026 AM      Y          1
27-JUN-21 09.02.49.301 AM      Y          1
27-JUN-21 09.02.59.609 AM      Y          1
27-JUN-21 09.03.09.846 AM      Y          1
27-JUN-21 09.03.20.085 AM      Y          1
27-JUN-21 09.03.30.454 AM      Y          1
27-JUN-21 09.03.40.693 AM      Y          1
27-JUN-21 09.03.50.997 AM      Y          1
27-JUN-21 09.04.01.365 AM      Y          1
27-JUN-21 09.04.11.606 AM      Y          1
27-JUN-21 09.04.21.846 AM      Y          1
27-JUN-21 09.04.32.086 AM      Y          1
27-JUN-21 09.04.42.326 AM      Y          1

18 rows selected.

SELECT
    sql_id,
    sample_time
FROM
    dba_hist_active_sess_history
WHERE
    sql_id = '65gw9y7cw56k1'
ORDER BY 2;

SQL_ID        SAMPLE_TIME
------------- ---------------------------
65gw9y7cw56k1 27-JUN-21 09.01.47.733 AM
65gw9y7cw56k1 27-JUN-21 09.01.57.973 AM
65gw9y7cw56k1 27-JUN-21 09.02.08.277 AM
65gw9y7cw56k1 27-JUN-21 09.02.18.518 AM
65gw9y7cw56k1 27-JUN-21 09.02.28.757 AM
65gw9y7cw56k1 27-JUN-21 09.02.39.026 AM
65gw9y7cw56k1 27-JUN-21 09.02.49.301 AM
65gw9y7cw56k1 27-JUN-21 09.02.59.609 AM
65gw9y7cw56k1 27-JUN-21 09.03.09.846 AM
65gw9y7cw56k1 27-JUN-21 09.03.20.085 AM
65gw9y7cw56k1 27-JUN-21 09.03.30.454 AM
65gw9y7cw56k1 27-JUN-21 09.03.40.693 AM
65gw9y7cw56k1 27-JUN-21 09.03.50.997 AM
65gw9y7cw56k1 27-JUN-21 09.04.01.365 AM
65gw9y7cw56k1 27-JUN-21 09.04.11.606 AM
65gw9y7cw56k1 27-JUN-21 09.04.21.846 AM
65gw9y7cw56k1 27-JUN-21 09.04.32.086 AM
65gw9y7cw56k1 27-JUN-21 09.04.42.326 AM

18 rows selected. ----> all ASH rows dumped into AWR

select * from table(dbms_xplan.display_awr('65gw9y7cw56k1'));

no rows selected ----> but there is no execution plan into AWR

As you can see, there is still no execution plan in AWR. Normally, as demonstrated above, it would be dumped at the next AWR snapshot time (an hour later at most), unless the sql_id gets flushed from memory in the meantime, which is exactly what I am going to do now. First, here are the plan and the cursor as they currently sit in memory:

SQL> select * from table(dbms_xplan.display_cursor('65gw9y7cw56k1'));

--------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes |
--------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |
|   1 |  NESTED LOOPS SEMI           |       |   100 | 13100 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |  1000K|   115M|
|   3 |    INDEX FULL SCAN           | T1_N1 |  1000K|       |
|*  4 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |    10 |
|*  5 |    INDEX RANGE SCAN          | T2_I1 |   100 |       |
--------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("ID"="ID")
   5 - access("X1"=17335)
   
   
SELECT
     p.sql_id
    ,p.plan_hash_value
    ,p.child_number
	,p.last_load_time
    ,p.executions    
 FROM   
    gv$sql p
where
    p.sql_id = '65gw9y7cw56k1'
and    
  p.is_shareable ='Y';
  
SQL_ID        PLAN_HASH_VALUE CHILD_NUMBER LAST_LOAD_TIME      EXECUTIONS
------------- --------------- ------------ ------------------- ----------
65gw9y7cw56k1      1518369540            0 2021-06-27/09:01:40          1  


select address, hash_value from v$sqlarea where sql_id='65gw9y7cw56k1';

ADDRESS          HASH_VALUE
---------------- ----------
000000006754FB80 3653409345


SQL> exec dbms_shared_pool.purge ('000000006754FB80, 3653409345', 'C');

PL/SQL procedure successfully completed.

SELECT
     p.sql_id
    ,p.plan_hash_value
    ,p.child_number
	,p.last_load_time
    ,p.executions    
 FROM   
    gv$sql p
where
    p.sql_id = '65gw9y7cw56k1';
	

no rows selected ----> cursor is now purged from memory

Now that I have flushed the sql_id from memory, the next AWR snapshot will not be able to dump its execution plan (nor its execution statistics), as the following proves:

select
   snap_id,
   to_char(begin_interval_time, 'dd/mm/yyyy hh24:mi:ss') begin
  ,to_char(end_interval_time, 'dd/mm/yyyy hh24:mi:ss') end
from
   dba_hist_snapshot
order by 1 desc
fetch first 5 rows only;

  SNAP_ID BEGIN               END
---------- ------------------- -------------------
       398 27/06/2021 08:02:48 27/06/2021 09:00:32
       397 26/06/2021 19:00:18 27/06/2021 08:02:48
       396 26/06/2021 18:00:31 26/06/2021 19:00:18
       395 26/06/2021 17:00:43 26/06/2021 18:00:31
       394 26/06/2021 15:46:40 26/06/2021 17:00:43
	   
	   
SQL> EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

PL/SQL procedure successfully completed.

select
   snap_id,
   to_char(begin_interval_time, 'dd/mm/yyyy hh24:mi:ss') begin
  ,to_char(end_interval_time, 'dd/mm/yyyy hh24:mi:ss') end
from
   dba_hist_snapshot
order by 1 desc
fetch first 5 rows only;

   SNAP_ID BEGIN               END
---------- ------------------- -------------------
       399 27/06/2021 09:00:32 27/06/2021 09:17:52
       398 27/06/2021 08:02:48 27/06/2021 09:00:32
       397 26/06/2021 19:00:18 27/06/2021 08:02:48
       396 26/06/2021 18:00:31 26/06/2021 19:00:18
       395 26/06/2021 17:00:43 26/06/2021 18:00:31	   
	   
SQL> select * from table(dbms_xplan.display_awr('65gw9y7cw56k1'));

no rows selected

SQL> select count(1) from dba_hist_active_sess_history where sql_id = '65gw9y7cw56k1';

  COUNT(1)
----------
        18

In conclusion: because the information collected in the dynamic performance views is dumped into AWR only at snapshot time, don’t be surprised if you cannot find the execution plan of a cursor even though that cursor is present in dba_hist_active_sess_history. This is most likely because the cursor was flushed out of memory before the AWR snapshot was taken.
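When you face this situation on a real system, the following hedged checks summarise the whole story in one shot: the ASH samples made it into AWR, but neither the plan nor the cursor statistics did:

select count(*) ash_samples
from   dba_hist_active_sess_history
where  sql_id = '&sql_id';

select count(*) plan_lines_in_awr
from   dba_hist_sql_plan
where  sql_id = '&sql_id';

select count(*) sqlstat_rows_in_awr
from   dba_hist_sqlstat
where  sql_id = '&sql_id';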

June 9, 2018

ASH, SQL Monitor and Extended trace

Filed under: ASH,SQL Monitor — hourim @ 3:21 pm

Without any analogy with Sergio Leone’s famous movie “The Good, the Bad and the Ugly”, which marked my teenage years, I would rather define the above three Oracle features as “The Good, the Good and the Good”, as they very often make my consultancy days. This article has been inspired by Nenad Noveljic’s recent blog post about negative unaccounted-for time, in which he mentioned that the SQL Monitor Global Stats section might show a misleading value in the Elapsed time. In Nenad’s case this misleading piece of information was due to unaccounted-for time. In fact, I know three other situations where this time discrepancy might occur: when fetching data, when hard parsing a long-to-parse query, and when using parallel execution. In this article I will demonstrate the fetching-data case. Two other articles, for the parse and parallel cases, will follow to complete the series.

Fetching data

I executed a query which takes a few minutes to complete but spends a considerable amount of time fetching a large number of columns over quite a large number of rows. While the query was running, I took several snapshots of its corresponding 12cR2 SQL Monitor report, which I am presenting below.
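Reports like the ones below can be produced, while the statement is still running, with the classic SQL Monitor API call, something along these lines (a sketch; the exact invocation used here is not shown):

set long 1000000 longchunksize 1000000 linesize 250 pagesize 0
select dbms_sqltune.report_sql_monitor(
          sql_id => 'g6px76dmjv1jy'
         ,type   => 'TEXT') report
from dual;

Here is the first snapshot, taken about ten seconds into the execution: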

Global Information
------------------------------
 Status              :  EXECUTING             
 Instance ID         :  1                     
 SQL ID              :  g6px76dmjv1jy                    
 Execution Started   :  06/05/2018 09:54:19   
 First Refresh Time  :  06/05/2018 09:54:25  
 Last Refresh Time   :  06/05/2018 09:54:29   
 Duration            :  11s                  
                 
Global Stats
=========================================
| Elapsed |   Cpu   |  IO      | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|    10   |    8.36 |     1.34 |   2M   |
=========================================

SQL Plan Monitoring Details (Plan Hash Value=2278376452)
===========================================================================
| Id    |                 Operation |       Name       | Execs |   Rows   |
|       |                           |                  |       | (Actual) |
===========================================================================
|     0 | SELECT STATEMENT          |                  |     1 |          |
|     1 |   NESTED LOOPS OUTER      |                  |     1 |          |
|     2 |    NESTED LOOPS           |                  |     1 |          |
|     3 |     FILTER                |                  |     1 |          |

As you can see, the Elapsed time and the Duration are showing roughly the same value of about 10 seconds. But notice as well that, at this stage of the execution, no data had been sent to the client yet (the Actual Rows column is empty).

At exactly the same moment, ASH is also showing approximately the same elapsed time as shown below:

column  cnt   format 9999
compute sum label 'Total Elapsed' of cnt on report

break   on report

select 
   sql_plan_line_id
  ,count(1) cnt
from
   gv$active_session_history
 where
    sample_time between to_date('&datefrom', 'mm/dd/yyyy hh24:mi:ss')
				and     to_date('&dateto', 'mm/dd/yyyy hh24:mi:ss')
and sql_id = '&sql_id'
group by sql_plan_line_id
order by 2 desc;

SQL> @ashplanlineId
Enter value for datefrom:  06/05/2018 09:54:19
Enter value for dateto:  06/05/2018 09:54:29
Enter value for sql_id: g6px76dmjv1jy


SQL_PLAN_LINE_ID   CNT
---------------- -----
              10    7
              13    1
              11    1
                 -----
Total Elapsed       9

However, as soon as Oracle started sending data to the client, a discrepancy between the Elapsed time of the Global Stats section and the Duration started to appear, as shown below:

Global Information
------------------------------
 Status              :  EXECUTING             
 Instance ID         :  1                     
 SQL ID              :  g6px76dmjv1jy                    
 Execution Started   :  06/05/2018 09:54:19   
 First Refresh Time  :  06/05/2018 09:54:25  
 Last Refresh Time   :  06/05/2018 09:56:18   
 Duration            :  120s                  
 Fetch Calls         :  375            
    
Global Stats
=================================================
| Elapsed |   Cpu   |  IO      | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    32   |    11   |     21   |   375 |   3M   |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=2278376452)
===========================================================================
| Id    |                 Operation |       Name       | Start |   Rows   |
|       |                           |                  | Active| (Actual) |
===========================================================================
| ->  0 | SELECT STATEMENT          |                  |   +30 |     5611 |
| ->  1 |   NESTED LOOPS OUTER      |                  |   +30 |     5611 |
| ->  2 |    NESTED LOOPS           |                  |   +30 |     5611 |
| ->  3 |     FILTER                |                  |   +30 |          |

ASH, in accordance with the SQL Monitor Elapsed time, also started to report a misleading elapsed time, as shown below:

SQL> @ashplanlineId
Enter value for datefrom:  06/05/2018 09:54:19
Enter value for dateto:  06/05/2018 09:56:18
Enter value for sql_id: g6px76dmjv1jy


SQL_PLAN_LINE_ID   CNT
---------------- -----
              13    25
              10     7
               4     1
              11     1
                 -----
Total Elapsed       34

And so on, until the query finished fetching the entire data set after about 527 seconds, while both ASH and the Elapsed time of the SQL Monitor Global Stats section were still showing roughly 34 seconds of elapsed time, as shown below respectively:

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)             
 Instance ID         :  1                     
 SQL ID              :  g6px76dmjv1jy                    
 Execution Started   :  06/05/2018 09:54:19   
 First Refresh Time  :  06/05/2018 09:54:25  
 Last Refresh Time   :  06/05/2018 10:03:06   
 Duration            :  527s                  
 Fetch Calls         :  2085            
    
Global Stats
=================================================
| Elapsed |   Cpu   |  IO      | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    34   |    12   |     22   |  2085 |   3M   |
=================================================

SQL> @ashplanlineId
Enter value for datefrom:  06/05/2018 09:54:19
Enter value for dateto:  06/05/2018 09:56:18
Enter value for sql_id: g6px76dmjv1jy


SQL_PLAN_LINE_ID   CNT
---------------- -----
              13    26
              10     7
               4     1
              11     1
               5     1
                 -----
Total Elapsed       36

Using Nenad’s query, I can also report the same misleading elapsed time from ASH:

column  event format a30
column  cnt   format 9999
compute sum label 'Total Elapsed' of cnt on report

break   on report
select 
   session_state, event
  ,count(1) cnt
from
   gv$active_session_history
 where
   sql_id = '&sql_id'
 and sql_exec_id = (select max(sql_exec_id) 
                    from gv$active_session_history
					where sql_id = '&sql_id'
					)
group by event, session_state
;

SESSION EVENT                            CNT
------- ------------------------------ -----
WAITING direct path read                  26
ON CPU                                    11
                                       -----
Total Elapsed                             37

TKPROF

In a second attempt to make this time difference crystal clear, I traced another execution of the same query with the 10046 trace event. The corresponding SQL Monitor report and TKPROF summary are reproduced below.
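For completeness, a 10046 trace with wait events is typically enabled at session level along these lines (a sketch; the exact commands used are not shown in this post):

alter session set tracefile_identifier = 'fetch_test';  -- arbitrary identifier
alter session set events '10046 trace name context forever, level 8';
-- run the query and fetch all the rows, then:
alter session set events '10046 trace name context off';
-- finally, format the raw trace file, e.g.:
-- tkprof <trace_file>.trc fetch_test.txt sys=no waits=yes

Here is, first, the corresponding SQL Monitor report: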

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)             
 Instance ID         :  1                     
 SQL ID              :  g6px76dmjv1jy                    
 Execution Started   :  06/05/2018 09:27:23   
 First Refresh Time  :  06/05/2018 09:27:30  
 Last Refresh Time   :  06/05/2018 09:39:21   
 Duration            :  718s                  
 Fetch Calls         :  2085            
    
Global Stats
=================================================
| Elapsed |   Cpu   |  IO      | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    198  |    15   |     183  |  2085 |   3M   |
=================================================


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.21       0.21          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     2085     15.18     198.03     647106    2918455          0       31246
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2087     15.39     198.24     647106    2918455          0       31246


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                    2085        0.00          0.00
  SQL*Net more data to client                     2        0.00          0.00
  Disk file operations I/O                       90        0.00          0.00
  db file sequential read                     30313        0.09        158.42
  reliable message                                1        0.00          0.00  
  enq: KO - fast object checkpoint                1        0.00          0.00
  direct path read                             1220        0.09         24.76
  SQL*Net message from client                  2085        0.93        519.45
********************************************************************************

As you can see, the Duration of the SQL Monitor report shows that the query took 718 seconds, while the Elapsed time shows only 198 seconds (15 seconds of CPU and 183 seconds of I/O waits).
The TKPROF output, however, confirms the reliability of the SQL Monitor Duration figure, since it shows that the total elapsed time of the query is:

158.42 + 24.76     = 183.18 seconds for I/O waits
183.18 + 519.45    = 702.63 seconds for I/O waits and SQL*Net message from client
702.63 + 15.39(cpu)= 718.02 seconds

Naturally, ASH cannot show the full picture either, since a large part of the query time was spent on the idle SQL*Net message from client wait event, which ASH does not sample:

SQL> @ashplanlineId
Enter value for datefrom:  06/05/2018 09:27:23
Enter value for dateto:  06/05/2018 09:39:21
Enter value for sql_id: g6px76dmjv1jy

SQL_PLAN_LINE_ID   CNT
---------------- -----
              12   153
              13    28
              10     9
              11     6
              14     1
                 -----
Total Elapsed      197

Summary

We have clearly shown that when a query fetches a large amount of data to the client, this transfer time is not taken into account by the Elapsed time of the SQL Monitor Global Stats section. This is why, as mentioned by Nenad in his article, you should always rely on the Duration reported in the Global Information section of the SQL Monitor report rather than on the Elapsed time of the Global Stats section.
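As a practical complement, the two figures can also be compared directly from gv$sql_monitor while the statement is still being monitored; a hedged sketch:

select sql_id
      ,sql_exec_id
      ,round((last_refresh_time - sql_exec_start) * 86400) approx_duration_s  -- roughly the Duration figure
      ,round(elapsed_time / 1e6)                           elapsed_s          -- the Global Stats Elapsed Time
from   gv$sql_monitor
where  sql_id = '&sql_id'
order  by sql_exec_id;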

September 28, 2016

AWR and superfluous historical statistics

Filed under: ASH — hourim @ 6:11 pm

An active sql_id is subject to capture and load into ASH (gv$active_session_history). Each ASH sample of such a sql_id has a 1-in-10 chance of being flagged (is_awr_sample = 'Y') for capture into the AWR persisted historical tables as well. In addition to the ASH view gv$active_session_history and its AWR alter ego, the dba_hist_active_sess_history table, performance and tuning specialists make extensive use of the dba_hist_sqlstat table, into which execution statistics taken from gv$sql are periodically dumped by the MMON process at AWR snapshot time.
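The 1-in-10 flagging can be observed directly on any reasonably busy system with a query along these lines (a sketch, not part of the original post):

select is_awr_sample
      ,count(*) samples
from   gv$active_session_history
group  by is_awr_sample;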

One of the queries I use against dba_hist_sqlstat is the following one, which I refer to as histstats.sql (I think I originally picked it up somewhere on the internet):

SELECT 
sn.snap_id,
plan_hash_value,
st.sql_profile,
executions_delta execs,
TRUNC(elapsed_time_delta/1e6/DECODE(executions_delta, 0, 1, executions_delta)) avg_etime,
  ROUND(disk_reads_delta    /DECODE(executions_delta,0,1, executions_delta),1) avg_pio,
  ROUND(buffer_gets_delta   /DECODE(executions_delta,0,1, executions_delta), 1) avg_lio ,
  ROUND(rows_processed_delta/DECODE(executions_delta,0, 1, executions_delta), 1) avg_rows
FROM 
  dba_hist_sqlstat st,
  dba_hist_snapshot sn
WHERE st.snap_id        = sn.snap_id
AND sql_id              = '&sql_id'
AND begin_interval_time >= to_date('&date','ddmmyyyy')
ORDER BY 1 ASC;

That said, how would you read and interpret the following output of the above query, taken from a running system for a particular sql_id?

SNAP_ID     PLAN_HASH_VALUE SQL_PROFILE                     EXECS AVG_ETIME AVG_LIO 
----------  --------------- ------------------------------ ----- ---------- ----------      
30838        726354567                                      0       7227    3945460      
30838        726354567                                      0       7227    3945460      
30839       4211030025      prf_2yhzvghb06vh4_4211030025    1          3      28715      
30839       4211030025      prf_2yhzvghb06vh4_4211030025    1          3      28715      
30839        726354567                                      0       7140    5219336      
30839        726354567                                      0       7140    5219336 
30840        726354567                                      0       7203    9389840      
30840        726354567                                      0       7203    9389840      
30840       4211030025      prf_2yhzvghb06vh4_4211030025    0       2817    7831649      
30840       4211030025      prf_2yhzvghb06vh4_4211030025    0       2817    7831649      
30841        726354567                                      0       7192    5200201      
30841        726354567                                      0       7192    5200201      
30841       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0            
30841       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0            
30842       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0      
30842       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0      
30842        726354567                                      0       4956    3183667      
30842        726354567                                      0       4956    3183667  

Or, to make things less confusing, how would you interpret the different rows of snapshot 30841, reproduced below?

SNAP_ID     PLAN_HASH_VALUE SQL_PROFILE                     EXECS AVG_ETIME AVG_LIO 
----------  --------------- ------------------------------ ----- ---------- ----------      
30841        726354567                                      0       7192    5200201      
30841        726354567                                      0       7192    5200201      
30841       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0            
30841       4211030025      prf_2yhzvghb06vh4_4211030025    0          0          0            

What do those zero values in the lines with a non-null SQL Profile actually mean?

The answer to this question resides in the way Oracle dumps the statistics of a SQL statement from memory into the AWR tables. When an active sql_id has multiple child cursors in gv$sql, Oracle will (provided the sql_id qualifies for capture, of course) dump into dba_hist_sqlstat the statistics of all its distinct child cursors, including those not used by the sql_id during the captured interval (even the statistics of non-shareable cursors are dumped into this view, as long as these cursors are still present in gv$sql at capture time).

For example, in the above list you can see that we have 4 rows of execution statistics at snapshot 30841, of which the first two actually correspond to the plan_hash_value 726354567 that was really used. According to their elapsed time (7192 seconds), these executions span multiple snapshots (this is a serial execution plan, by the way). The remaining lines, those showing a SQL Profile, are in fact superfluous and confusing. They have been captured only because a second shareable child cursor (plan_hash_value 4211030025) was present in gv$sql at the moment of the capture (snapshot 30841). One indication that they are superfluous is the zero values in their executions, average logical I/O, physical I/O and number of rows processed.

Now that the scene has been set, we need a reproducible example. For that purpose, any SQL statement having multiple child cursors in gv$sql will do the trick. Among other possible scenarios, an easy way of engineering such a case is to use Adaptive Cursor Sharing (here in 12.1.0.2.0):

The model

-- t1 is assumed to be a simple two-column table (its create statement was not shown in the original post)
CREATE TABLE t1 (n1 NUMBER, n2 NUMBER);

INSERT INTO t1
SELECT level n1 ,
  CASE
    WHEN level = 1
    THEN 1
    WHEN level > 1
    AND level <= 101
    THEN 100
    WHEN level > 101
    AND level <= 1101
    THEN 1000
    WHEN level > 10001
    AND level <= 11000
    THEN 10000
    ELSE 1000000
  END n2
FROM dual
  CONNECT BY level < 1200150;
  
CREATE INDEX t1_i1 ON t1(n2);

BEGIN
  dbms_stats.gather_table_stats 
         (user 
         ,'t1' 
         ,method_opt => 'for all columns size auto'
         ,cascade => true 
         ,estimate_percent => dbms_stats.auto_sample_size 
         );
END;
/ 

Generating multiple child cursors using ACS

var ln2 number;
exec :ln2 := 100;

SQL> select count(1) FROm t1 WHERE n2 <= :ln2;

  COUNT(1)
----------
       101

exec :ln2 := 1000000
SQL> select count(1) FROm t1 WHERE n2 <= :ln2;

  COUNT(1)
----------
   1200149

SQL_ID  9sp9wvczrvpty, child number 0
-------------------------------------
select count(1) FROm t1 WHERE n2 <= :ln2

Plan hash value: 2432955788

---------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |
|   1 |  SORT AGGREGATE   |       |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T1_I1 |   218 |   654 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"<=:LN2)

In order for the above cursor to become bind aware and get a new optimal execution plan at each execution, the query needs to be executed once again (more details on this Adaptive Cursor Sharing warm-up can be found elsewhere):

SQL> select count(1) FROm t1 WHERE n2 <= :ln2;

  COUNT(1)
----------
   1200149

SQL_ID  9sp9wvczrvpty, child number 1
-------------------------------------
select count(1) FROm t1 WHERE n2 <= :ln2

Plan hash value: 3724264953
----------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |
|*  2 |   TABLE ACCESS FULL| T1   |  1191K|  3489K|
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("N2"<=:LN2)

Notice now that a new child cursor n°1 has been generated, while the existing child cursor n°0 has switched to a non-shareable status. In order to get the index range scan plan again, we need to run the same query once more with the corresponding bind variable value:

exec :ln2 := 100;

select count(1) FROm t1 WHERE n2 <= :ln2;

SQL_ID  9sp9wvczrvpty, child number 2
-------------------------------------
select count(1) FROm t1 WHERE n2 <= :ln2

Plan hash value: 2432955788
---------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |
|   1 |  SORT AGGREGATE   |       |     1 |     3 |
|*  2 |   INDEX RANGE SCAN| T1_I1 |   218 |   654 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("N2"<=:LN2)

Finally, we have achieved the initial goal of having multiple execution plans in gv$sql for the above SQL statement:

SELECT 
  sql_id ,
  child_number ,
  first_load_time ,
  last_load_time ,
  plan_hash_value ,
  executions execs ,
  rows_processed ,
  trunc(buffer_gets/executions) lio ,
  is_shareable
FROM gv$sql
WHERE sql_id = '9sp9wvczrvpty' 
--and is_shareable = 'Y'    
order by last_load_time desc; 

SQL_ID        CHILD_NUMBER FIRST_LOAD_TIME     LAST_LOAD_TIME      PLAN_HASH_VALUE  EXECS ROWS_PROCESSED        LIO I
------------- ------------ ------------------- ------------------- --------------- ------ -------------- ---------- -
9sp9wvczrvpty            0 2016-09-28/07:19:21 2016-09-28/07:19:21      2432955788      2              2       1225 N
9sp9wvczrvpty            1 2016-09-28/07:19:21 2016-09-28/07:20:44      3724264953      1              1       2222 Y
9sp9wvczrvpty            2 2016-09-28/07:19:21 2016-09-28/07:22:03      2432955788      1              1          3 Y

Notice that the sql_id (9sp9wvczrvpty) now has three child cursors, of which child cursor n°0 is no longer shareable; a quick look at the ACS flags (sketched just below) confirms it. At this point there is still nothing in the AWR tables, as the histstats output that follows shows:
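As a side check (a sketch, not in the original post), the ACS flags of the three child cursors can be displayed as follows:

select child_number
      ,plan_hash_value
      ,is_bind_sensitive
      ,is_bind_aware
      ,is_shareable
from   gv$sql
where  sql_id = '9sp9wvczrvpty'
order  by child_number;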

SQL>  @histstats
Enter value for sql_id: 9sp9wvczrvpty
Enter value for date: 27092016

no rows selected

Forcing AWR to dump SQL statistics

In order to make Oracle dump the statistics of this SQL statement, we are going to force a manual AWR snapshot by means of the following package call:

SQL> exec dbms_workload_repository.create_snapshot;  

The call will create a snapshot and dump the above content of gv$sql into dba_hist_sqlstat table as shown below:

SQL>  @histstats
Enter value for sql_id: 9sp9wvczrvpty
Enter value for date: 27092016

   SNAP_ID SQL_ID        PLAN_HASH_VALUE SQL_PROFILE                     EXECS  AVG_ETIME    AVG_LIO  AVG_PIO  AVG_ROWS
---------- ------------- --------------- ------------------------------ ------ ---------- ---------- -------- ---------
      4305 9sp9wvczrvpty      2432955788                                     3          0        817        0         1
      4305 9sp9wvczrvpty      3724264953                                     1          0       2222        0         1

That is exactly what was expected: Oracle has captured into the dba_hist_sqlstat table 3 executions at plan_hash_value 2432955788 and 1 execution at plan_hash_value 3724264953.

Now that we have multiple child cursors present in gv$sql, if AWR captures the sql_id it will dump all of its child cursors, even those not used during the captured interval. For example, if I run the same query again with the bind variable set to 1000000 (ln2 = 1000000), Oracle will use the plan with plan_hash_value 3724264953; but it will, nevertheless, dump the statistics of the other, unused plan_hash_value into dba_hist_sqlstat as well.

exec :ln2 := 1000000

select count(1) FROm t1 WHERE n2 <= :ln2;

exec dbms_workload_repository.create_snapshot;

SQL> @histstats
Enter value for sql_id: 9sp9wvczrvpty
Enter value for date: 27092016

SNAP_ID SQL_ID        PLAN_HASH_VALUE SQL_PROFILE    EXECS  AVG_ETIME    AVG_LIO  AVG_PIO  AVG_ROWS
----- ------------- --------------- ------------- ------ ---------- ---------- -------- ---------
 4305 9sp9wvczrvpty      2432955788                    3          0        817        0         1
 4305 9sp9wvczrvpty      3724264953                    1          0       2222        0         1
 
 4306 9sp9wvczrvpty      2432955788                    0          0          0        0         0
 4306 9sp9wvczrvpty      3724264953                    1          0       2222        0         1

That’s it. Notice how, as expected, Oracle has dumped two rows for the new snapshot 4306, of which the first one, with plan_hash_value 2432955788, is superfluous and has been captured into the AWR tables only because it was present in gv$sql at the snapshot capture time.

Now that we know that Oracle dumps all the plan_hash_values present in gv$sql at snapshot capture time, even those not used during the snapshot interval, I have added a minor where clause to the above extensively used script in order to get rid of the superfluous lines in the historical execution statistics:

SELECT
 *
FROM
 (SELECT
   sn.snap_id
   ,plan_hash_value
   ,sql_profile
   ,executions_delta execs
   ,trunc(elapsed_time_delta/1e6/decode(executions_delta, 0, 1, executions_delta)) avg_etime
   ,round(disk_reads_delta/decode(executions_delta,0,1, executions_delta),1) avg_pio
   ,round(buffer_gets_delta/decode(executions_delta,0,1, executions_delta), 1) avg_lio
   ,round(px_servers_execs_delta/decode(executions_delta,0,1, executions_delta), 1) avg_px
   ,round(rows_processed_delta/decode(executions_delta,0, 1, executions_delta), 1) avg_rows
  FROM
     dba_hist_sqlstat st,
     dba_hist_snapshot sn
  WHERE st.snap_id = sn.snap_id
  AND sql_id = '&sql_id'
  AND begin_interval_time > to_date('&from_date','ddmmyyyy')
 )
 WHERE avg_lio != 0 -- added clause
 ORDER by 1 asc;

SQL> @histstats2
Enter value for sql_id: 9sp9wvczrvpty
Enter value for from_date: 27092016

SNAP_ID PLAN_HASH_VALUE SQL_PROFILE  EXECS  AVG_ETIME  AVG_PIO    AVG_LIO     AVG_PX  AVG_ROWS
------- --------------- ----------- ------ ---------- -------- ---------- ---------- ---------
   4305      2432955788                  3          0        0      817.7          0         1
   4305      3724264953                  1          0        0       2222          0         1

   4306      3724264953                  1          0        0       2222          0         1

The above added where clause excludes SQL statements with 0 logical I/O.

But wait a moment!!!

This might, at the same time, exclude vital information from the historical statistics. For example, the following situation does not consume any logical I/O but can spend a lot of time waiting for a lock to be released:

SQL-1 > select * from t1 where rownum <=  3 for update;

        N1         N2
---------- ----------
      1825    1000000
      1826    1000000
      1827    1000000

SQL-2> lock table t1 in exclusive mode;  

The second session will obviously hang. Wait a couple of seconds, go back to session n°1, issue a rollback and watch the new situation in session n°2:

SQL-1> rollback;

SQL-2> lock table t1 in exclusive mode;

Table(s) Locked.

All that remains to do is to take a manual snapshot and check how many logical I/Os have been consumed by the SQL statement which tried to lock the table in exclusive mode:

SQL> exec dbms_workload_repository.create_snapshot;

select sql_id
from gv$sql where sql_text like '%lock table t1%'
and sql_text not like '%v$sql%';

SQL_ID
-------------
a9nb52un8wqf4

SQL> @histstats
Enter value for sql_id: a9nb52un8wqf4
Enter value for date: 27092016

SNAP_ID SQL_ID        PLAN_HASH_VALUE SQL_PROFILE  EXECS  AVG_ETIME    AVG_LIO  AVG_PIO  AVG_ROWS
------- ------------- --------------- ----------- ------ ---------- ---------- -------- ---------
   4308 a9nb52un8wqf4               0                  1        159          0        0         0

That’s how a SQL statement can take a long time to complete while consuming 0 logical I/O. This is why I have added a second extra where clause to the histstats2 script, as shown below:

SELECT 
*
FROM
  (SELECT
    sn.snap_id ,
    plan_hash_value ,
    executions_delta execs ,
TRUNC(elapsed_time_delta/1e6/DECODE(executions_delta, 0, 1, executions_delta)) avg_etime ,
    ROUND(disk_reads_delta      /DECODE(executions_delta,0,1, executions_delta),1) avg_pio ,
    ROUND(buffer_gets_delta     /DECODE(executions_delta,0,1, executions_delta), 1) avg_lio ,
    ROUND(px_servers_execs_delta/DECODE(executions_delta,0,1, executions_delta), 1) avg_px ,
    ROUND(rows_processed_delta  /DECODE(executions_delta,0, 1, executions_delta), 1) avg_rows
  FROM 
    dba_hist_sqlstat st,
    dba_hist_snapshot sn
  WHERE st.snap_id        = sn.snap_id
  AND sql_id              = '&sql_id'
  AND begin_interval_time > to_date('&from_date','ddmmyyyy')
  )
WHERE avg_lio != 0 
     OR (avg_lio   =0 AND avg_etime > 0) 
 ORDER by 1 asc;  
     

This new script (histstats3.sql), when applied to the two sql_ids we have investigated in this post, gives respectively:

SQL> @HistStats3
Enter value for sql_id: 9sp9wvczrvpty
Enter value for from_date: 27092016

   SNAP_ID PLAN_HASH_VALUE  EXECS  AVG_ETIME  AVG_PIO    AVG_LIO     AVG_PX  AVG_ROWS
---------- --------------- ------ ---------- -------- ---------- ---------- ---------
      4305      2432955788      3          0        0      817.7          0         1
      4305      3724264953      1          0        0       2222          0         1
      4306      3724264953      1          0        0       2222          0         1

SQL> @HistStats3
Enter value for sql_id: a9nb52un8wqf4
Enter value for from_date: 27092016

   SNAP_ID PLAN_HASH_VALUE  EXECS  AVG_ETIME  AVG_PIO    AVG_LIO     AVG_PX  AVG_ROWS
---------- --------------- ------ ---------- -------- ---------- ---------- ---------
      4308               0      1        159        0          0          0         0

It goes without saying that I would be very grateful for feedback about the added where clause. I might have neglected a situation in which both logical I/O and elapsed time are zero but which would still be important to keep in the historical statistics.

Last but not least, you might have already noticed the appearance of the AVG_PX column in the above output. It will be the subject of the next article, about ASH-AWR statistics for parallel queries. Stay tuned: I have a nice example from a running system to share with you, where this column proves to be very helpful. I may also show a version of the script which includes the end_of_fetch_count column, which also turns out to be valuable information when trying to understand the average elapsed time of queries coming from a web service where a timeout has been implemented.
