Mohamed Houri’s Oracle Notes

March 17, 2015

A precise index

Filed under: Oracle — hourim @ 7:50 pm

Below is a select statement that is not performing very well, together with its corresponding row source execution plan:

SQL> select
       {list of columns}
    from
      tx_tables bot
    inner join ty_tables_tmp tmp
     on account_id      = tmp.account_id
     and trade_id       = tmp.trd_id
     where transferred <> 1;
----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                        | Starts | E-Rows | A-Rows | 
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                             |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                             |      1 |        |    301K|
|   2 |   NESTED LOOPS               |                             |      1 |     75 |    301K|
|   3 |    TABLE ACCESS FULL         | TY_TABLES_TMP               |      1 |      2 |      2 |        
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_IDX1              |      2 |  43025 |    301K|
|*  5 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                   |    301K|     38 |    301K|
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("TRADE_ID"="TMP"."TRD_ID")
   5 - filter(("BOT"."TRANSFERRED"<>1 AND "ACCOUNT_ID"="TMP"."ACCOUNT_ID"))

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     278595  consistent gets
          0  physical reads
          0  redo size
   10597671  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Let’s put aside the inadequate statistics (the CBO estimates 43K rows for the operation at line 4 while it actually generated 301K) and let’s try to figure out whether there is a way to avoid starting the TX_TABLES TABLE ACCESS BY INDEX ROWID operation 301K times. The double NESTED LOOPS (known as 11g NLJ_BATCHING) is driving here an outer row source of 301K rows (the NESTED LOOPS operation at line 2), which consequently starts the inner TABLE ACCESS BY INDEX ROWID operation 301K times (see the Starts column at line 5). If we get rid of the NESTED LOOPS at line 1, we might be able to reduce the number of times the operation at line 5 is started. And maybe, as a consequence of this reduction in starts, we will also decrease the number of corresponding logical I/Os. Disabling the nlj_batching feature can be achieved with the no_nlj_batching hint, as shown below:

SQL> select
       /*+ no_nlj_batching(bot) */
       {list of columns}
     from
      tx_tables bot
    inner join ty_tables_tmp tmp
     on account_id     = tmp.account_id
     and trade_id       = tmp.trd_id
     where transferred <> 1;
----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                        | Starts | E-Rows | A-Rows |  
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                             |      0 |        |      0 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| TX_TABLES                   |      1 |     38 |    301K|
|   2 |   NESTED LOOPS              |                             |      1 |     75 |    301K|
|   3 |    TABLE ACCESS FULL        | TY_TABLES_TMP               |      1 |      2 |      2 |
|*  4 |    INDEX RANGE SCAN         | TX_TABLES_IDX1              |      2 |  43025 |    301K|
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("BOT"."TRANSFERRED"<>1 AND
       "ACCOUNT_ID"="TMP"."ACCOUNT_ID"))
   4 - access("TRADE_ID"="TMP"."TRD_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     278595  consistent gets
          0  physical reads
          0  redo size
   10597671  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Although the Starts column no longer shows those 301K executions, the number of logical I/Os is still exactly the same. The performance issue, in contrast to what I initially thought, is not coming from the nlj_batching feature. As a next step, I decided it was time to look carefully at this query from the indexes point of view. The above two execution plans both made use of the TX_TABLES_IDX1 index, defined as shown below:

TX_TABLES_IDX1(TRADE_ID, EXT_TRD_ID)       

There is still room to create a precise index which might help in this case. This index might look like the following one:

SQL> create index TX_TABLES_FBI_IDX2
       on tx_tables (TRADE_ID
                    ,ACCOUNT_ID
                    ,CASE WHEN TRANSFERRED <> 1 THEN -1 ELSE NULL END
                    );

Once created, this index allowed the initial query (without any hint) to be honored with the following execution plan:

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                              |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                              |      1 |     75 |    301K|
|   2 |   TABLE ACCESS FULL          | TY_TABLES_TMP                |      1 |      2 |      2 |
|*  3 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                    |      2 |     38 |    301K|
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_FBI_IDX2           |      2 |   4141 |    301K|
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("BOT"."TRANSFERRED"<>1)
   4 - access("TRADE_ID"="TMP"."TRD_ID" AND
       "ACCOUNT_ID"="TMP"."ACCOUNT_ID")
Note
-----
   - dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
        199  recursive calls
          0  db block gets
     108661  consistent gets
        229  physical reads
          0  redo size
    8394791  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Notice how the new index has not only got rid of the nlj_batching double nested loops and reduced the number of operations Oracle has to start, but has also reduced the logical I/O consumption to 108K instead of the initial 278K. However, we still have not changed the predicate part of the query to match exactly the function-based part of the new index (CASE WHEN TRANSFERRED <> 1 THEN -1 ELSE NULL END), which explains why we still have a filter on the TX_TABLES operation at line 3. As always with function-based indexes, you need the predicate part of the query to match the definition of the function-based index expression. In other words, this translates to the following new query (look at its last line):

SQL> select
       {list of columns}
     from
       tx_tables bot
     inner join ty_tables_tmp tmp
      on account_id     = tmp.account_id
      and trade_id       = tmp.trd_id
     where (case when transferred <> 1 then -1 else null end) = -1;

Here is the new resulting execution plan:

-----------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                         | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                              |      1 |        |    301K|
|   1 |  NESTED LOOPS                |                              |      1 |      1 |    301K|
|   2 |   TABLE ACCESS FULL          | TY_TABLES_TMP                |      1 |      1 |      2 |
|   3 |   TABLE ACCESS BY INDEX ROWID| TX_TABLES                    |      2 |     75 |    301K|
|*  4 |    INDEX RANGE SCAN          | TX_TABLES_FBI_IDX2           |      2 |   4141 |    301K|
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("TRADE_ID"="TMP"."TRD_ID"
               AND "ACCOUNT_ID"="TMP"."ACCOUNT_ID"
               AND "BOT"."SYS_NC00060$"=(-1))
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     108454  consistent gets
          0  physical reads
          0  redo size
    8394791  bytes sent via SQL*Net to client
     221895  bytes received via SQL*Net from client
      20131  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     301944  rows processed

Notice now that, having matched the predicate part of the query with the function-based index definition, there is no longer any filter on the TX_TABLES table. Although this time the effect has not been noticeable, in other circumstances it might turn out to be a drastic enhancement.
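
As a side note, the "SYS_NC00060$" column appearing in the access predicate above is the hidden virtual column Oracle creates behind the scenes to support the function-based index expression. Here is a minimal sketch, assuming the index lives in the current schema, showing how to expose that hidden column and the expression it is built on:

SQL> select column_name, hidden_column, data_default
     from   user_tab_cols
     where  table_name = 'TX_TABLES'
     and    virtual_column = 'YES';

SQL> select index_name, column_expression
     from   user_ind_expressions
     where  index_name = 'TX_TABLES_FBI_IDX2';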

Bottom line: a precise index can help the CBO follow a better path.

March 16, 2015

Diagnosing the past

Filed under: Tuning — hourim @ 8:08 pm

This is a simple note about diagnosing a situation that happened in the past.

A running application suffered a delay in many of its multi-user insert statements, blocked on the enq: TM - contention wait event.


The on-call DBA was simply killing the culprit blocking session when I received an e-mail asking me to investigate the root cause of this lock. Since this was a task of diagnosing the very recent past, using v$active_session_history imposed itself:

select sql_id,event, count(1)
from v$active_session_history
where sample_time > to_date('05032015 10:00:00','ddmmrrrr hh24:mi:ss')
and   sample_time < to_date('05032015 11:00:00','ddmmrrrr hh24:mi:ss')
and event like '%TM%'
group by sql_id,event
order by 3 desc;

SQL_ID            EVENT                   COUNT(1)
----------------- ----------------------- --------
3fbwp7qdqxk9v     enq: TM - contention    1

Surprisingly, there was only one recorded enq: TM - contention wait event during the exact period of the blocked insert statements; and the corresponding sql_id had nothing to do with the blocked inserts.

I spent a few minutes staring, puzzled, at the above select and started thinking about the basics, which say:

  • v$active_session_history is a sample of all active sessions taken every second.
  • dba_hist_active_sess_history is a one-in-ten sample of v$active_session_history

I knew as well that, v$active_session_history being an in-memory buffer, its retention period depends on the size of the buffer and on the volume of active sessions. But I was diagnosing an almost real-time situation; so why was there no "functional" enq: TM - contention in v$active_session_history?
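
A quick hedged sanity check is worth running here: the following sketch shows how far back v$active_session_history actually reaches on the instance you are connected to, and the 'ASH buffers' row in v$sgastat shows how much memory the in-memory buffer was given:

SQL> select min(sample_time) oldest_sample, max(sample_time) newest_sample
     from   v$active_session_history;

SQL> select pool, name, round(bytes/1024/1024) size_mb
     from   v$sgastat
     where  name = 'ASH buffers';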

Finally, I decided to use the less precise one-in-ten sample table, and this is what I got:

select sql_id,event, count(1)
from dba_hist_active_sess_history
where sample_time > to_date('05032015 10:00:00','ddmmrrrr hh24:mi:ss')
and   sample_time < to_date('05032015 11:00:00','ddmmrrrr hh24:mi:ss')
and event like '%TM%'
group by sql_id,event
order by 3 desc;

SQL_ID            EVENT                   COUNT(1)
----------------- ----------------------- --------
53xthsbv8d7yk     enq: TM - contention    4878
1w95zpw2fy021     enq: TM - contention    340
35ghv3bugv22a     enq: TM - contention    264
8b9nqpzs24n0t     enq: TM - contention    163
aqdaq2ybqkrpa     enq: TM - contention    156
50gygyqsha3nr     enq: TM - contention    103
fzfvzhjg0p6y0     enq: TM - contention    82
bs359cfsq4fvc     enq: TM - contention    80
15xpc3um0c3a2     enq: TM - contention    58
d0rndrymh0b18     enq: TM - contention    49
864jbgkbpvcnf     enq: TM - contention    40
9cn21y7hbya46     enq: TM - contention    36
8419w8jnhfa3m     enq: TM - contention    33
f71jbkdy94pph     enq: TM - contention    5
2zpyy8wbnp5d0     enq: TM - contention    3
0d6gq7b9j522p     enq: TM - contention    2

Normally, whatever we can see in dba_hist_active_sess_history has certainly travelled via v$active_session_history; and the more recent the situation, the better the chance of finding it mentioned in v$active_session_history. Why, then, had I not found what I was expecting? Before answering this simple question, let me say a few words about how I explained this TM lock and the solution I proposed to get rid of it.

A TM enqueue is almost always related to an unindexed foreign key. The session killed by the DBA was deleting from a parent table (parent_tab). The underlying child table (child_tab) pointed to this "deleted" parent table via an unindexed foreign key, and was consequently locked. In the meantime, the application was concurrently inserting into other tables. Those inserts add child values coming from the locked child_tab table (which becomes a parent table in the eyes of those insert statements), values whose existence Oracle has to check in the locked "parent" child_tab table; hence the lock sensation reported by the end users during their insert statements. The solution was simply to index the foreign key in child_tab, as sketched below.
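
For illustration only (child_tab and the parent_id column are placeholders, not the real application objects), the fix amounts to the first statement below; the second is a classic dictionary sketch listing the single-column foreign keys that still lack a supporting index:

SQL> create index child_tab_fk_i on child_tab (parent_id);

SQL> select c.table_name, cc.column_name, c.constraint_name
     from   user_constraints  c
     join   user_cons_columns cc
            on cc.constraint_name = c.constraint_name
     where  c.constraint_type = 'R'
     and    not exists (select null
                        from   user_ind_columns ic
                        where  ic.table_name      = c.table_name
                        and    ic.column_name     = cc.column_name
                        and    ic.column_position = 1);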

Back to the reason that prompted me to write this note: why was v$active_session_history not showing the same very recent TM lock as dba_hist_active_sess_history? The answer is simply that I had forgotten that the application runs on a RAC database and that I was pointing to the other node of the RAC, so when I issued the same select against the gv$ view the discrepancy between the two views ceased immediately, as shown below:

SQL> select sql_id,event, count(1)
    from gv$active_session_history
    where sample_time > to_date('05032015 10:00:00','ddmmrrrr hh24:mi:ss')
    and   sample_time < to_date('05032015 11:00:00','ddmmrrrr hh24:mi:ss')
    and event like '%TM%'
    group by sql_id,event
    order by 3 desc;

SQL_ID        EVENT                       COUNT(1)
------------- --------------------------- ---------
53xthsbv8d7yk enq: TM - contention        48483
1w95zpw2fy021 enq: TM - contention        3370
35ghv3bugv22a enq: TM - contention        2635
8b9nqpzs24n0t enq: TM - contention        1660
aqdaq2ybqkrpa enq: TM - contention        1548
50gygyqsha3nr enq: TM - contention        1035
fzfvzhjg0p6y0 enq: TM - contention        821
bs359cfsq4fvc enq: TM - contention        801
15xpc3um0c3a2 enq: TM - contention        585
d0rndrymh0b18 enq: TM - contention        491
864jbgkbpvcnf enq: TM - contention        378
9cn21y7hbya46 enq: TM - contention        366
8419w8jnhfa3m enq: TM - contention        331
f71jbkdy94pph enq: TM - contention        46
2zpyy8wbnp5d0 enq: TM - contention        33
0d6gq7b9j522p enq: TM - contention        15
dmpafdd7anvrw enq: TM - contention        1
3fbwp7qdqxk9v enq: TM - contention        1

select sql_id,event, count(1)
from dba_hist_active_sess_history
where sample_time > to_date('05032015 10:00:00','ddmmrrrr hh24:mi:ss')
and   sample_time < to_date('05032015 11:00:00','ddmmrrrr hh24:mi:ss')
and event like '%TM%'
group by sql_id,event
order by 3 desc;

SQL_ID            EVENT                   COUNT(1)
----------------- ----------------------- --------
53xthsbv8d7yk     enq: TM - contention    4878
1w95zpw2fy021     enq: TM - contention    340
35ghv3bugv22a     enq: TM - contention    264
8b9nqpzs24n0t     enq: TM - contention    163
aqdaq2ybqkrpa     enq: TM - contention    156
50gygyqsha3nr     enq: TM - contention    103
fzfvzhjg0p6y0     enq: TM - contention    82
bs359cfsq4fvc     enq: TM - contention    80
15xpc3um0c3a2     enq: TM - contention    58
d0rndrymh0b18     enq: TM - contention    49
864jbgkbpvcnf     enq: TM - contention    40
9cn21y7hbya46     enq: TM - contention    36
8419w8jnhfa3m     enq: TM - contention    33
f71jbkdy94pph     enq: TM - contention    5
2zpyy8wbnp5d0     enq: TM - contention    3
0d6gq7b9j522p     enq: TM - contention    2

Spot, by the way, the formidable illustration of the basics mentioned earlier in this article, i.e. dba_hist_active_sess_history is a one-in-ten sample of v$active_session_history:

48483/10 ~ 4878 
3370/10  ~  340
2635/10  ~ 264
1660/10  ~ 163
1548/10  ~ 156

March 9, 2015

Index compression: two birds with one stone

Filed under: Index — hourim @ 7:52 pm

Have you ever been asked to troubleshoot a performance issue of a complex query with the following restrictions?

  • You are not allowed to change the code of the query because it belongs to a third party software
  • You are not allowed to create a new index because of disk space stress
  • You are supposed to solve the issue without using either an SPM baseline or a SQL Profile

What do you think you still have on your hands to tackle this issue?

I was quite confident that the performance issue was coming, as it almost always does, from poor or non-representative statistics which ultimately biased the optimizer into choosing a wrong execution path. The row source execution plan, taken this time from the Real-Time SQL Monitoring report, confirmed my initial feeling about non-representative statistics, as shown by the several estimated cardinalities of 1 below (Rows (Estim)):

SQL Plan Monitoring Details (Plan Hash Value=2278065992)
==============================================================================================
| Id |                Operation                 |          Name |  Rows   | Execs |   Rows   |
|    |                                          |               | (Estim) |       | (Actual) |
==============================================================================================
|  0 | SELECT STATEMENT                         |               |         |     1 |     1059 |
|  1 |   HASH GROUP BY                          |               |       1 |     1 |     1059 |
|  2 |    FILTER                                |               |         |     1 |     135K |
|  3 |     NESTED LOOPS                         |               |       1 |     1 |     135K |
|  4 |      NESTED LOOPS                        |               |       1 |     1 |     135K |
|  5 |       NESTED LOOPS OUTER                 |               |       1 |     1 |     135K |
|  6 |        NESTED LOOPS                      |               |       1 |     1 |     135K |
|  7 |         HASH JOIN OUTER                  |               |       1 |     1 |     145K |
|  8 |          NESTED LOOPS                    |               |         |     1 |     145K |
|  9 |           NESTED LOOPS                   |               |       1 |     1 |     145K |
| 10 |            NESTED LOOPS                  |               |       1 |     1 |     146K |
| 11 |             NESTED LOOPS                 |               |       1 |     1 |     146K |
| 12 |              NESTED LOOPS                |               |       1 |     1 |        1 |
| 13 |               NESTED LOOPS               |               |       1 |     1 |        1 |
| 14 |                FAST DUAL                 |               |       1 |     1 |        1 |
| 15 |                FAST DUAL                 |               |       1 |     1 |        1 |
| 16 |               FAST DUAL                  |               |       1 |     1 |        1 |
| 17 |              TABLE ACCESS BY INDEX ROWID | TABLE_XY      |       1 |     1 |     146K |
| 18 |               INDEX RANGE SCAN           | IDX_TABLE_XY23|       1 |     1 |      12M |
==============================================================================================
17 - filter(("AU"."NEW_VALUE"=:SYS_B_119 AND "AU"."ATTRIBUTE_NAME"=:SYS_B_118))
18 - access("AU"."UPDATED_DATE">=TO_DATE(:SYS_B_001||TO_CHAR(EXTRACT(MONTH FROM
              ADD_MONTHS(CURRENT_DATE,(-:SYS_B_000))))||:SYS_B_002||TO_CHAR(EXTRACT(YEAR FROM
              ADD_MONTHS(CURRENT_DATE,(-:SYS_B_000))))||:SYS_B_003,:SYS_B_004)
              AND "AU"."COURSE_NAME"=:SYS_B_117
              AND "AU"."UPDATED_DATE"<=TO_DATE(TO_CHAR(LAST_DAY(ADD_MONTHS(CURRENT_DATE,(-
             :SYS_B_000))),:SYS_B_005)||:SYS_B_006,:SYS_B_007))
     filter("AU"."COURSE_NAME"=:SYS_B_117)

Collecting statistics adequately was of course the right path to follow; however, looking at the above execution plan I realized that the most consuming operation was an index range scan followed by a table access by index rowid. The index operation at line 18 was supplying its parent operation at line 17 with 12 million rows, of which the filter at line 17 allowed only about 1% (146K) to survive elimination. This is a classical problem of an imprecise index wasting a lot of time and energy throwing away rows that should have been eliminated earlier. There is another indication of the imprecision of the index used at operation line 18: its predicate part contains both an access and a filter operation. To make the picture clear, here below are the index definition and the predicate part used in the problematic query:

IDX_TABLE_XY23 (UPDATED_DATE, COURSE_NAME)

FROM TABLE_XY     
WHERE AU.COURSE_NAME = 'Point'
AND AU.UPDATED_DATE  >= PidDates.END_DATE

The index has been defined to start with the column to which an inequality is applied. We should instead always place at the leading edge of the index the column to which we intend to apply an equality predicate. One solution would be to reverse the above index columns. But I was not going to do that without checking the whole application for queries that use the UPDATED_DATE column alone and that reversing the columns of that index would therefore harm. Luckily there were no such queries, and the way was paved for me to proceed with the index column reordering; a quick check is sketched below.
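
A hedged way to carry out that check against the currently cached cursors is shown below; v$sql only sees what is still in the shared pool, so the application code repository remains the authoritative source (and the LIKE patterns are rough filters, not an exact parse):

SQL> select sql_id, substr(sql_text, 1, 80) sql_text
     from   v$sql
     where  upper(sql_text) like '%UPDATED_DATE%'
     and    upper(sql_text) not like '%COURSE_NAME%';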

A few minutes before going ahead, I remembered that this application was suffering from a disk space shortage, so compressing the index would certainly help. Moreover, the "new" leading index column is the more repetitive one (only 63 distinct values), which gives a better level of compressibility:

SQL> select column_name, num_distinct
     from all_tab_col_statistics
     where table_name = 'TABLE_XY'
     and column_name in ('UPDATED_DATE','COURSE_NAME');

COLUMN_NAME                    NUM_DISTINCT
------------------------------ ------------
UPDATED_DATE                   1309016064
COURSE_NAME                    63

SQL> create index idx_TABLE_XYmh on TABLE_XY(COURSE_NAME, UPDATED_DATE) compress 1;

And here are the results

Before the new index

1059 rows selected.
Elapsed: 00:32:37.34

Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|    2020 |     154 |     1867 |    72 |    10M |   1M |   9GB |
================================================================

After the new index

1059 rows selected.
Elapsed: 00:19:56.08

Global Stats
========================================================
| Elapsed |   Cpu   |    IO    | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Gets  | Reqs | Bytes |
========================================================
|    1204 |      70 |     1134 |     9M | 463K |   4GB |
========================================================

The query execution time dropped from almost 33 minutes to 20 minutes, and the logical and physical I/O dropped accordingly.

Of course, I am still throwing away the same amount of rows at the table level, as shown below:

SQL Plan Monitoring Details (Plan Hash Value=2577971998)
===================================================================
| Id    |  Operation                 |          Name   |   Rows   |
|       |                            |                 | (Actual) |
===================================================================
|    17 |TABLE ACCESS BY INDEX ROWID | TABLE_XY        |     146K |
|    18 | INDEX RANGE SCAN           | IDX_TABLE_XYMH  |      12M |
===================================================================
17 - filter(("AU"."NEW_VALUE"=:SYS_B_119 AND "AU"."ATTRIBUTE_NAME"=:SYS_B_118))
18 - access("AU"."COURSE_NAME"=:SYS_B_117 AND
           "AU"."UPDATED_DATE">=TO_DATE(:SYS_B_001||TO_CHAR(EXTRACT(MONTH FROM
            ADD_MONTHS(CURRENT_DATE,(-:SYS_B_000))))||:SYS_B_002||TO_CHAR(EXTRACT(YEAR FROM
            ADD_MONTHS(CURRENT_DATE,(-:SYS_B_000))))||:SYS_B_003,:SYS_B_004) AND
           "AU"."UPDATED_DATE"<=TO_DATE(TO_CHAR(LAST_DAY(ADD_MONTHS(CURRENT_DATE,(-:SYS_B_000)))
           ,:SYS_B_005)||:SYS_B_006,:SYS_B_007))

But I have drastically reduced the part of the index I am traversing compared with the situation before the column reordering, and I have got rid of the filter in the index predicate. And finally, because it is the title of this article, by compressing the index I have gained 67GB:

SQL> select segment_name, trunc(bytes/1024/1024/1024) GB
    from dba_segments
    where segment_type = 'INDEX'
    and segment_name in ('IDX_TABLE_XYMH','IDX_TABLE_XY23');

SEGMENT_NAME                           GB
------------------------------ ----------
IDX_TABLE_XYMH                        103
IDX_TABLE_XY23                        170
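
For completeness, there is a hedged way to estimate, before rebuilding anything, how much a given index would gain from prefix compression: validate its structure and read index_stats. Bear in mind that analyze ... validate structure (in its offline form, the one that populates index_stats) blocks DML on the table while it runs, so it should not be done casually on a busy system:

SQL> analyze index idx_TABLE_XYmh validate structure;

SQL> select name, height, opt_cmpr_count, opt_cmpr_pctsave
     from   index_stats;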
