Mohamed Houri’s Oracle Notes

July 9, 2015

Stressed ASH

Filed under: Oracle — hourim @ 5:29 pm

It is well known that any record found in dba_hist_active_sess_history has inevitably been routed there from v$active_session_history. If so, how should we interpret the following cut & paste from a running production system?

ASH first

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
                                                                        372
direct path read                                                        185
log file parallel write                                                  94
Disk file Mirror Read                                                    22
control file sequential read                                             20
control file parallel write                                              18
direct path write temp                                                   16
Streams AQ: qmn coordinator waiting for slave to start                   12
db file parallel read                                                    11
gc cr multi block request                                                 6
enq: KO - fast object checkpoint                                          4
db file sequential read                                                   3
ges inquiry response                                                      3
os thread startup                                                         2
PX Deq: Signal ACK RSG                                                    2
enq: CF - contention                                                      1
PX Deq: Slave Session Stats                                               1
Disk file operations I/O                                                  1
IPC send completion sync                                                  1
reliable message                                                          1
null event                                                                1
enq: CO - master slave det                                                1
db file parallel write                                                    1
gc current block 2-way                                                    1

AWR next

SQL> select event, count(1)
    from dba_hist_active_sess_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
SQL*Net break/reset to client                                         12950
enq: TM - contention                                                  12712
                                                                        624
db file sequential read                                                 386
enq: TX - row lock contention                                           259
SQL*Net message from dblink                                              74
direct path read                                                         62
SQL*Net more data from dblink                                            27
log file parallel write                                                  26
log file sync                                                            15
SQL*Net more data from client                                             9
control file sequential read                                              7
Disk file Mirror Read                                                     6
gc cr grant 2-way                                                         5
db file parallel write                                                    4
read by other session                                                     3
control file parallel write                                               3
Streams AQ: qmn coordinator waiting for slave to start                    3
log file sequential read                                                  2
direct path read temp                                                     2
enq: KO - fast object checkpoint                                          2
gc cr multi block request                                                 1
CSS initialization                                                        1
gc current block 2-way                                                    1
reliable message                                                          1
db file parallel read                                                     1
gc buffer busy acquire                                                    1
ges inquiry response                                                      1
direct path write temp                                                    1
rdbms ipc message                                                         1
os thread startup                                                         1

12,950 samples of SQL*Net break/reset to client and 12,712 samples of the enq: TM - contention wait event appear in AWR but are not found in ASH. How can we interpret this situation?

This 11.2.0.4.0 database is implemented on a RAC infrastructure with 2 instances. Let's look at the ASH of the two instances separately.

Instance 1 first

SQL> select event, count(1)
    from v$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

 no rows selected

Instance 2 next

SQL> select event, count(1)
    from v$active_session_history
    where sample_time between to_date('06072015 18:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('06072015 19:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                                              COUNT(1)
---------------------------------------------------------------- ----------
                                                                        372
direct path read                                                        185
log file parallel write                                                  94
Disk file Mirror Read                                                    22
control file sequential read                                             20
control file parallel write                                              18
direct path write temp                                                   16
Streams AQ: qmn coordinator waiting for slave to start                   12
db file parallel read                                                    11
gc cr multi block request                                                 6
enq: KO - fast object checkpoint                                          4
db file sequential read                                                   3
ges inquiry response                                                      3
os thread startup                                                         2
PX Deq: Signal ACK RSG                                                    2
enq: CF - contention                                                      1
PX Deq: Slave Session Stats                                               1
Disk file operations I/O                                                  1
IPC send completion sync                                                  1
reliable message                                                          1
null event                                                                1
enq: CO - master slave det                                                1
db file parallel write                                                    1
gc current block 2-way                                                    1

Everything sampled in ASH during that specific time interval comes from the second instance, while the first instance reports no records at all for the corresponding interval. This inevitably calls into question either the ASH buffer size of instance 1 or an imbalanced workload between the two instances:

ASH size first

SQL> select
  2        inst_id
  3        ,total_size
  4      from gv$ash_info;

   INST_ID TOTAL_SIZE
---------- ----------
         1  100663296
         2  100663296

ASH Activity next

SQL> select
        inst_id
       ,total_size
       ,awr_flush_emergency_count
     from gv$ash_info;

   INST_ID TOTAL_SIZE AWR_FLUSH_EMERGENCY_COUNT
---------- ---------- -------------------------
         1  100663296                       136
         2  100663296                         0

The activity is clearly concentrated on instance 1, and the abnormal and unusual 12,950 SQL*Net break/reset to client wait events have exacerbated the rate of inserts into the ASH buffer of that instance, generating the 136 awr_flush_emergency_count and, as such, the discrepancy between ASH and AWR.

This is also confirmed by the difference in the ASH retention period between the two instances.

Instance 1 first, where only about 3 hours of ASH data are kept

SQL> select min(sample_time), max(sample_time)
  2  from v$active_session_history;

MIN(SAMPLE_TIME)                         MAX(SAMPLE_TIME)
---------------------------------------  -------------------------
08-JUL-15 05.51.20.502 AM                08-JUL-15 08.35.48.233 AM

Instance 2 next, where several days' worth of ASH data are still present

SQL> select min(sample_time), max(sample_time)
  2  from v$active_session_history;

MIN(SAMPLE_TIME)                         MAX(SAMPLE_TIME)
---------------------------------------  -------------------------
25-JUN-15 20.01.43                       08-JUL-15 08.37.17.233 AM
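The per-instance retention comparison can also be done in a single query from any instance. Below is a sketch assuming the oldest_sample_time and latest_sample_time columns of gv$ash_info (present in 11.2; check the reference documentation for your version):

```sql
-- A sketch: compare ASH in-memory retention across all RAC instances.
-- Assumes gv$ash_info exposes oldest_sample_time/latest_sample_time.
select inst_id
      ,oldest_sample_time
      ,latest_sample_time
      ,latest_sample_time - oldest_sample_time as retention_interval
      ,awr_flush_emergency_count
from   gv$ash_info
order  by inst_id;
```

An instance whose retention interval is measured in hours while its sibling keeps days of samples is a strong sign of an imbalanced ASH insert rate.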

The solution would be one of the following (listed, I think, in order of priority):

  • Solve the SQL*Net break/reset to client issue, which is dramatically filling up the ASH buffer and causing unexpectedly rapid flushes of important, more precise data
  • Balance the workload between the two instances
  • Increase the ASH buffer size of instance 1 by means of alter system set "_ash_size"=25165824;
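Since this is a RAC database, the underscore parameter can be targeted at the starved instance only. Below is a sketch with a hypothetical instance name; being an undocumented parameter, it should be validated with Oracle Support before use:

```sql
-- A sketch: raise the ASH buffer of instance 1 only (here to 24MB).
-- "_ash_size" is an undocumented parameter: check with Oracle Support.
-- 'PROD1' is a hypothetical instance name; substitute your own.
alter system set "_ash_size" = 25165824 sid = 'PROD1';
```

If the parameter turns out not to be dynamically modifiable in your version, add scope=spfile and restart the target instance.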

In the next article I will explain how I identified what was causing these unusual SQL*Net break/reset to client wait events.

July 2, 2015

Don’t pre-empt the CBO from doing its work

Filed under: Oracle — hourim @ 2:03 pm

This is the last part of the parallel insert/select saga. As a reminder, below are the two preceding episodes:

  •  Part 1: where I have explained why I was unable to get the corresponding SQL monitoring report because of the _sqlmon_max_planlines parameter.
  •  Part 2: where I have explained the oddity shown by the SQL monitoring report when monitoring non-active parallel servers for more than 30 minutes.

In Part 3 I will share with you how I succeeded in solving this issue and convinced people not to pre-empt the Oracle optimizer from doing its work.

Thanks to the monitoring of this insert/select I succeeded in isolating the part of the execution plan that absolutely needed to be tuned:

Error: ORA-12805
------------------------------
ORA-12805: parallel query server died unexpectedly

Global Information
------------------------------
 Status                                 :  DONE (ERROR)
 Instance ID                            :  2
 SQL ID                                 :  bg7h7s8sb5mnt
 SQL Execution ID                       :  33554432
 Execution Started                      :  06/24/2015 05:06:14
 First Refresh Time                     :  06/24/2015 05:06:21
 Last Refresh Time                      :  06/24/2015 09:05:10
 Duration                               :  14336s
 DOP Downgrade                          :  50%                 

Global Stats
============================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Cluster  |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes |
============================================================================================
|   38403 |   35816 |     0.42 |        2581 |     0.16 |     6.09 |     7G |  103 | 824KB |
============================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3668294770)
======================================================================================================
| Id  |                Operation         |             Name  |  Rows   | Execs |   Rows   | Activity |
|     |                                  |                   | (Estim) |       | (Actual) |   (%)    |
======================================================================================================
| 357 |VIEW PUSHED PREDICATE             | NAEHCE            |      59 | 23570 |    23541 |          |
| 358 | NESTED LOOPS                     |                   |      2M | 23570 |    23541 |     0.05 |
| 359 |  INDEX FAST FULL SCAN            | TABLEIND1         |   27077 | 23570 |     667M |     0.19 |
| 360 |  VIEW                            | VW_JF_SET$E6DCA8A3|       1 |  667M |    23541 |     0.10 |
| 361 |   UNION ALL PUSHED PREDICATE     |                   |         |  667M |    23541 |    30.59 |
| 362 |    NESTED LOOPS                  |                   |       1 |  667M |     1140 |     0.12 |
| 363 |     TABLE ACCESS BY INDEX ROWID  | TABLE2            |       1 |  667M |    23566 |     1.25 |
| 364 |      INDEX UNIQUE SCAN           | IDX_TABLE2        |       1 |  667M |     667M |    17.81 |
| 365 |     TABLE ACCESS BY INDEX ROWID  | TABLE3            |       1 | 23566 |     1140 |          |
| 366 |      INDEX RANGE SCAN            | IDX_TABLE3        |      40 | 23566 |     174K |          |
| 367 |    NESTED LOOPS                  |                   |       1 |  667M |    22401 |     0.11 |
| 368 |     TABLE ACCESS BY INDEX ROWID  | TABLE2            |       1 |  667M |    23566 |     1.27 |
| 369 |      INDEX UNIQUE SCAN           | IDX_TABLE2        |       1 |  667M |     667M |    17.72 |
| 370 |     TABLE ACCESS BY INDEX ROWID  | TABLE3            |       1 | 23566 |    22401 |     0.01 |
| 371 |      INDEX RANGE SCAN            | TABLE31           |      36 | 23566 |       4M |          |

The NESTED LOOPS operation at line 358 has an INDEX FAST FULL SCAN (TABLEIND1) as its outer data source, driving an inner row source represented by an internal view (VW_JF_SET$E6DCA8A3) built by Oracle on the fly. Reduced to the bare minimum it should resemble this:

SQL Plan Monitoring Details (Plan Hash Value=3668294770)
=====================================================================================
| Id  |                 Operation |             Name   |  Rows   | Execs |   Rows   |
|     |                           |                    | (Estim) |       | (Actual) |
=====================================================================================
| 358 |  NESTED LOOPS             |                    |      2M | 23570 |    23541 |
| 359 |   INDEX FAST FULL SCAN    | TABLEIND1          |   27077 | 23570 |     667M |
| 360 |   VIEW                    | VW_JF_SET$E6DCA8A3 |       1 |  667M |    23541 |

Observe carefully the operation at line 359, which is the operation upon which Oracle bases its join method choice. Very often a NESTED LOOPS operation is wrongly chosen by the optimizer because of inaccurate estimations made at the first operation of the NESTED LOOPS join. Let's check the accuracy of the estimation Oracle made in this case for the operation at line 359:

   Rows(Estim) * Execs = 27077 * 23570 = 638,204,890 ~ 638M
   Rows(Actual)        = 667M
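This estimate-versus-actual check can be scripted for every line of a monitored plan. Below is a sketch against v$sql_plan_monitor, using the sql_id and sql_exec_id from the report above (plan_cardinality is the per-execution estimate):

```sql
-- A sketch: Rows(Estim) * Execs versus Rows(Actual) per plan line.
select plan_line_id
      ,plan_operation
      ,plan_cardinality * starts as estimated_rows  -- Estim * Execs
      ,output_rows               as actual_rows
from   v$sql_plan_monitor
where  sql_id      = 'bg7h7s8sb5mnt'
and    sql_exec_id = 33554432
order  by plan_line_id;
```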

The estimation done by the optimizer at this step is good. So why on earth would Oracle decide to opt for a NESTED LOOPS operation when it knows prior to execution that the outer row source will produce 667M rows, forcing the inner operations to be executed 667M times? There is no way Oracle would opt for this solution unless instructed to do so. And indeed, looking into the huge insert/select statement I found, among a tremendous number of hints, a use_nl (o h) hint which dictates that the optimizer join TABLEIND1 with the rest of the view using a NESTED LOOPS operation. It was then a battle to convince the client to get rid of that hint. What made the client hesitate is that very often the same insert/select statement (including the use_nl hint) completes in an acceptable time. I was therefore obliged to explain why, despite the presence of the use_nl hint (which I was suggesting to be the cause of the performance degradation), the insert/select very often completes in an acceptable time. To explain this situation it suffices to get the execution plan of an acceptable execution (reduced to the bare minimum) and spot the obvious:

SQL Plan Monitoring Details (Plan Hash Value=367892000)
====================================================================================
| Id  |                Operation |             Name   |  Rows   | Execs |   Rows   |
|     |                          |                    | (Estim) |       | (Actual) |
====================================================================================
| 168 |VIEW PUSHED PREDICATE     | NAEHCE             |       1 | 35118 |    35105 |
| 169 | NESTED LOOPS             |                    |       2 | 35118 |    35105 |
| 170 |  VIEW                    | VW_JF_SET$86BE946E |       2 | 35118 |    35105 |
| 182 |  INDEX UNIQUE SCAN       | TABLEIND1          |       1 | 35105 |    35105 |

The join order switched from (TABLEIND1, VW_JF_SET$86BE946E) to (VW_JF_SET$86BE946E, TABLEIND1). As long as the use_nl (o h) hint is not completed by a leading (h o) hint indicating in what order Oracle has to join these two objects, the choice of the crucial outer operation is left to Oracle. When the index is chosen as the outer row source, the insert/select statement performs very poorly. However, when the same index is used as the inner operation of the join, the insert/select statement performs in an acceptable time.
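To make the point concrete, here is a sketch with hypothetical tables and the aliases o and h from the statement, contrasting the incomplete and the completed hinting:

```sql
-- Incomplete: use_nl forces the join method only; the join order,
-- and hence the choice of the outer row source, is left to the CBO.
select /*+ use_nl(o h) */ o.col1, h.col2
from   t_outer o
      ,t_hinted h
where  o.id = h.id;

-- Completed: leading(h o) fixes h as the outer (driving) row source,
-- so o (and its index) sits in the inner loop, probed once per row of h.
select /*+ leading(h o) use_nl(o) */ o.col1, h.col2
from   t_outer o
      ,t_hinted h
where  o.id = h.id;
```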

With that explained, the client was convinced; the hints were disabled, the insert/select was re-launched, and it completed within a few seconds thanks to the appropriate HASH JOIN operation chosen by the optimizer:

Global Information
------------------------------
 Status                                 :  DONE
 Instance ID                            :  2
 SQL ID                                 :  9g2a3gstkr7dv
 SQL Execution ID                       :  33554432
 Execution Started                      :  06/24/2015 12:53:49
 First Refresh Time                     :  06/24/2015 12:53:52
 Last Refresh Time                      :  06/24/2015 12:54:05
 Duration                               :  16s                      

Global Stats
============================================================================================
| Elapsed |   Cpu   |    IO    | Concurrency | Cluster  |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes |
============================================================================================
|      23 |      21 |     0.91 |        0.03 |     0.22 |     0.31 |     1M |  187 |   1MB |
============================================================================================

SQL Plan Monitoring Details (Plan Hash Value=3871743977)
=================================================================================================
| Id  |                           Operation   |             Name   |  Rows   | Execs |   Rows   |
|     |                                       |                    | (Estim) |       | (Actual) |
=================================================================================================
| 153 |       VIEW                            | NAEHCE             |      2M |     1 |       2M |
| 154 |        HASH JOIN                      |                    |      2M |     1 |       2M |
| 155 |         INDEX FAST FULL SCAN          | TABLEIND1          |   27077 |     1 |    28320 |
| 156 |         VIEW                          | VW_JF_SET$86BE946E |      2M |     1 |       2M |

Note as well that when the optimizer opted for a HASH JOIN, the VIEW PUSHED PREDICATE operation and the underlying JPPD (join predicate push-down) transformation ceased to be used, because JPPD occurs only with NESTED LOOPS joins.

Bottom line: always try to supply Oracle with fresh and representative statistics and let it do its job. Don't pre-empt it from doing its normal work by systematically hinting it when confronted with a performance issue. And when you do decide to use hints, make sure to hint completely, particularly the outer and inner tables of a hinted NESTED LOOPS join (or the build and probe tables of a hinted HASH JOIN).
