Mohamed Houri’s Oracle Notes

August 5, 2015

Flash back causing library cache: mutex X

Filed under: Oracle — hourim @ 5:28 pm

Recently one of our applications suffered from a severe performance issue. The application runs on an 11.2.0.4.0 database used to validate a pre-production release. This performance issue delayed the test campaign and the validation process by more than 3 days. The ASH data taken during the degraded period shows this:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('15072015 16:00:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('15072015 16:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                COUNT(1)
----------------------------------- ----------
library cache: mutex X                    3928
kksfbc child completion                    655
cursor: pin S wait on X                    580
PX Deq: Slave Session Stats                278
                                           136
db file sequential read                    112
cursor: pin S                               35
null event                                  26
latch: shared pool                          15
cursor: mutex S                             13
library cache lock                          11
read by other session                       10
log file parallel write                      5
PX Deq: Signal ACK EXT                       3
os thread startup                            3
log file sync                                2
latch free                                   1
db file parallel write                       1
SQL*Net more data from client                1
enq: PS - contention                         1
cursor: mutex X                              1
direct path read                             1
control file sequential read                 1
CSS operation: action                        1

As you can see, the dominant wait event is:

EVENT                          COUNT(1)
------------------------------ -------
library cache: mutex X         3928

The library cache: mutex X wait event is a concurrency wait event, one of the six mutex-related wait events:

	cursor: pin S
	cursor: pin X
	cursor: pin S wait on X
	cursor: mutex S
	cursor: mutex X
	library cache: mutex X

Mutexes are similar to locks except that they protect objects in shared memory rather than rows in tables and indexes. Whenever a session wants to read or modify an object in the library cache shared memory (generally a cursor), it needs to pin that object and acquire a mutex on it. If another session simultaneously wants to access the same piece of memory, it will also try to acquire a mutex on it. That second session might then wait on one of those library or cursor mutex wait events, because the first session got there before it and has still not released the mutex.
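To see which mutex types and code locations are accumulating sleeps on an instance, the cumulative view v$mutex_sleep can be queried. A minimal diagnostic sketch (assuming SELECT privileges on the dynamic performance views):

```sql
-- Which mutex types/locations have sessions been sleeping on?
-- (counters are cumulative since instance startup)
select mutex_type, location, sleeps, wait_time
from   v$mutex_sleep
order  by sleeps desc;
```

On a system suffering from library cache: mutex X, the top rows typically show the "Library Cache" mutex type with large sleep counts.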

So, applied to my actual case, what went wrong to such an extent that library cache: mutex X contention made the database unusable?

SQL> select
       sql_id
      ,sql_child_number
      ,session_id
      ,in_parse
      ,in_sql_execution
    from
      gv$active_session_history
    where sample_time between to_date('15072015 16:00:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('15072015 16:30:00', 'ddmmyyyy hh24:mi:ss')
    and event = 'library cache: mutex X'
    order by sql_id;

SQL_ID        SQL_CHILD_NUMBER SESSION_ID IN_PARSE   IN_SQL_EXECUTION
------------- ---------------- ---------- ---------- ----------------
20f2kut7fg4g0               -1          8 Y          N
20f2kut7fg4g0               -1         20 Y          N
20f2kut7fg4g0                1         24 N          Y
20f2kut7fg4g0               -1         40 Y          N
20f2kut7fg4g0               -1         60 Y          N
20f2kut7fg4g0               -1         88 Y          N
20f2kut7fg4g0                1         89 N          Y
20f2kut7fg4g0                1         92 N          Y
20f2kut7fg4g0               -1        105 Y          N
20f2kut7fg4g0               -1        106 Y          N
20f2kut7fg4g0                1        109 N          Y
20f2kut7fg4g0               -1        124 Y          N
20f2kut7fg4g0               -1        128 Y          N
20f2kut7fg4g0               -1        143 Y          N
20f2kut7fg4g0               -1        157 Y          N
20f2kut7fg4g0                1        159 N          Y
20f2kut7fg4g0               -1        160 Y          N
20f2kut7fg4g0               -1        161 Y          N
20f2kut7fg4g0                1        172 N          Y
20f2kut7fg4g0                1        178 N          Y
20f2kut7fg4g0               -1        191 Y          N
20f2kut7fg4g0               -1        192 Y          N
20f2kut7fg4g0               -1        194 Y          N
20f2kut7fg4g0                1        209 N          Y
20f2kut7fg4g0               -1        223 Y          N
20f2kut7fg4g0                1        229 N          Y
20f2kut7fg4g0               -1        241 Y          N
20f2kut7fg4g0               -1        246 Y          N
20f2kut7fg4g0               -1        258 Y          N
20f2kut7fg4g0                1        259 N          Y
20f2kut7fg4g0                1        280 N          Y
20f2kut7fg4g0                1        294 N          Y
20f2kut7fg4g0               -1        309 Y          N
20f2kut7fg4g0               -1        310 Y          N
20f2kut7fg4g0               -1        328 Y          N
20f2kut7fg4g0                1        348 N          Y
20f2kut7fg4g0               -1        382 Y          N
20f2kut7fg4g0               -1        413 Y          N
20f2kut7fg4g0               -1        415 Y          N
20f2kut7fg4g0                1        428 N          Y
20f2kut7fg4g0               -1        449 Y          N
20f2kut7fg4g0               -1        450 Y          N
20f2kut7fg4g0               -1        462 Y          N
20f2kut7fg4g0                1        467 N          Y
20f2kut7fg4g0               -1        480 Y          N
20f2kut7fg4g0               -1        484 Y          N
20f2kut7fg4g0                1        516 N          Y
20f2kut7fg4g0               -1        533 Y          N
20f2kut7fg4g0                1        535 N          Y
20f2kut7fg4g0               -1        546 Y          N
20f2kut7fg4g0               -1        565 Y          N
20f2kut7fg4g0                1        568 N          Y
20f2kut7fg4g0               -1        584 Y          N
20f2kut7fg4g0               -1        585 Y          N
20f2kut7fg4g0               -1        601 Y          N
20f2kut7fg4g0               -1        602 Y          N
20f2kut7fg4g0               -1        615 Y          N
20f2kut7fg4g0               -1        619 Y          N
20f2kut7fg4g0               -1        635 Y          N
20f2kut7fg4g0               -1        652 Y          N
20f2kut7fg4g0               -1        667 Y          N
20f2kut7fg4g0               -1        668 Y          N
20f2kut7fg4g0               -1        687 Y          N
20f2kut7fg4g0               -1        705 Y          N
20f2kut7fg4g0               -1        717 Y          N
20f2kut7fg4g0               -1        721 Y          N
20f2kut7fg4g0               -1        733 Y          N
20f2kut7fg4g0               -1        735 Y          N
20f2kut7fg4g0               -1        753 Y          N
20f2kut7fg4g0               -1        754 Y          N
20f2kut7fg4g0                1        770 N          Y
20f2kut7fg4g0               -1        773 Y          N
20f2kut7fg4g0               -1        785 Y          N
20f2kut7fg4g0               -1        786 Y          N
20f2kut7fg4g0               -1        804 Y          N

75 rows selected.

I have limited the output to just one sql_id (20f2kut7fg4g0) in order to keep the explanation clear and simple.

What does this particular sql_id, executed by 75 different sessions that are sometimes parsing and sometimes executing, represent?

SQL> with got_my_sql_id
    as ( select sql_id, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 09:30:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 10:30:00', 'ddmmyyyy hh24:mi:ss')
    and event  = 'library cache: mutex X'
    group by sql_id)
    select distinct sql_id, sql_text
    from v$sql b
   where exists (select null
                 from got_my_sql_id a
                 where a.sql_id = b.sql_id)
   order by sql_id; 

SQL_ID        SQL_TEXT
------------ ----------------------------------------------------------------------
20f2kut7fg4g0 /* Flashback Table */ INSERT /*+ PARALLEL(S, DEFAULT) PARALLEL(T,
                 DEFAULT) */ INTO "DEV_ZXX"."CLOSED_DAY" SELECT
              /*+ USE_NL(S) ORDERED PARALLEL(S, DEFAULT) PARALLEL(T, DEFAULT) */
              S.* FROM SYS_TEMP_FBT T , "DEV_ZXX"."CLOSED_DAY"
              as of SCN :1 S WHERE T.rid = S.
              rowid and T.action = 'I' and T.object# = :2

The above piece of SQL code is generated by Oracle behind the scenes when flashing back the contents of a given table. And this is exactly what this client was doing: at the end of their pre-production test campaign they flash back a certain number of tables to the data they contained at the beginning of the test. And since the generated code runs in parallel with the default degree, it produced this kind of monitored execution plan:


Parallel Execution Details (DOP=96 , Servers Allocated=96) 

SQL Plan Monitoring Details (Plan Hash Value=4258977226)
===============================================================================
| Id |        Operation        |      Name       |  Rows   | Execs |   Rows   |
|    |                         |                 | (Estim) |       | (Actual) |
===============================================================================
|  0 | INSERT STATEMENT        |                 |         |     1 |          |
|  1 |   LOAD AS SELECT        |                 |         |     1 |          |
|  2 |    PX COORDINATOR       |                 |         |    82 |          |
|  3 |     PX SEND QC (RANDOM) | :TQ10000        |     322 |    81 |          |
|  4 |      PX BLOCK ITERATOR  |                 |     322 |    81 |          |
|  5 |       TABLE ACCESS FULL | TABLE_RULE_SUP  |     322 |     4 |          |
===============================================================================
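For reference, the client-side operation that triggers this internal parallel insert is a FLASHBACK TABLE statement of the following form (the SCN value here is purely illustrative):

```sql
-- Flashback Table requires row movement to be enabled on the target table
alter table dev_zxx.closed_day enable row movement;

-- Rewind the table content to a past SCN (or TO TIMESTAMP);
-- Oracle generates the hinted parallel insert shown above behind the scenes
flashback table dev_zxx.closed_day to scn 123456789;
```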

For every flashed back table, Oracle started 96 parallel servers (96 sessions) in order to perform a simple insert statement, causing the observed library cache: mutex X wait event. The DOP of 96 is the maximum DOP, which is in fact the default DOP determined by the following simplified formula:

DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT
DOP = 2 x 48 = 96

SQL> show parameter parallel

NAME                               TYPE        VALUE
---------------------------------- ----------- ----------
fast_start_parallel_rollback       string      LOW
parallel_adaptive_multi_user       boolean     TRUE
parallel_automatic_tuning          boolean     FALSE
parallel_degree_limit              string      CPU
parallel_degree_policy             string      MANUAL
parallel_execution_message_size    integer     16384
parallel_force_local               boolean     FALSE
parallel_instance_group            string
parallel_io_cap_enabled            boolean     FALSE
parallel_max_servers               integer     100
parallel_min_percent               integer     0
parallel_min_servers               integer     0
parallel_min_time_threshold        string      AUTO
parallel_server                    boolean     FALSE
parallel_server_instances          integer     1
parallel_servers_target            integer     100
_parallel_syspls_obey_force        boolean     TRUE
parallel_threads_per_cpu           integer     2
recovery_parallelism               integer     0

SQL> show parameter cpu

NAME                           TYPE        VALUE
------------------------------ ----------- -----
cpu_count                       integer     48
parallel_threads_per_cpu        integer     2
resource_manager_cpu_allocation integer     48

Having no possibility to hint Oracle's internal flashback code so that it will not execute in parallel, all I was left with was to prevent Oracle from starting a huge number of parallel processes by limiting the parallel_max_servers parameter to 8. As such, the maximum DOP will be limited to 8 whatever the cpu_count is.
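The cap itself is a one-line parameter change; a sketch of how it can be applied (SCOPE=BOTH is my assumption about how to make it both immediate and persistent; on RAC you would add sid='*'):

```sql
-- Cap the total number of PX server processes for the instance;
-- the effective DOP of any statement can never exceed this value
alter system set parallel_max_servers = 8 scope = both;
```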

Once this was done, I observed the following new situation for one flashback sql_id (a5u912v53t11t):


Global Information
------------------------------
 Status              :  DONE
 Instance ID         :  1
 Session             :  XXXXX (172:40851)
 SQL ID              :  a5u912v53t11t
 SQL Execution ID    :  16777236
 Execution Started   :  07/16/2015 11:21:55
 First Refresh Time  :  07/16/2015 11:21:55
 Last Refresh Time   :  07/16/2015 11:21:55
 Duration            :  .011388s
 Module/Action       :  JDBC Thin Client/-
 Service             :  SYS$USERS
 Program             :  JDBC Thin Client
 DOP Downgrade       :  92%                       

Global Stats
=======================================================
| Elapsed |   Cpu   | Concurrency |  Other   | Buffer |
| Time(s) | Time(s) |  Waits(s)   | Waits(s) |  Gets  |
=======================================================
|    0.05 |    0.00 |        0.04 |     0.00 |     19 |
=======================================================

Parallel Execution Details (DOP=8 , Servers Requested=96 , Servers Allocated=8)
==============================================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   | Concurrency |Buffer |
|                |       |         | Time(s) | Time(s) |  Waits(s)   | Gets  |
==============================================================================
| PX Coordinator | QC    |         |    0.00 |    0.00 |             |     4 |
| p000           | Set 1 |       1 |    0.01 |         |        0.01 |     3 |
| p001           | Set 1 |       2 |    0.01 |         |        0.01 |     3 |
| p002           | Set 1 |       3 |    0.00 |         |        0.00 |     3 |
| p003           | Set 1 |       4 |    0.00 |         |        0.00 |     3 |
| p004           | Set 1 |       5 |    0.00 |    0.00 |        0.00 |     3 |
| p005           | Set 1 |       6 |    0.01 |         |        0.01 |       |
| p006           | Set 1 |       7 |    0.00 |         |        0.00 |       |
| p007           | Set 1 |       8 |    0.01 |         |        0.01 |       |
==============================================================================

SQL Plan Monitoring Details (Plan Hash Value=96405358)
============================================================
| Id |        Operation        |   Name   |  Rows   | Cost |
|    |                         |          | (Estim) |      |
============================================================
|  0 | INSERT STATEMENT        |          |         |      |
|  1 |   LOAD AS SELECT        |          |         |      |
|  2 |    PX COORDINATOR       |          |         |      |
|  3 |     PX SEND QC (RANDOM) | :TQ10000 |     409 |    2 |
|  4 |      PX BLOCK ITERATOR  |          |     409 |    2 |
|  5 |       TABLE ACCESS FULL | TABLE_CLS|     409 |    2 |
============================================================

Notice how Oracle has serviced the insert statement with 8 parallel servers instead of the requested 96. This is a clear demonstration of how to bound the default DOP:

Parallel Execution Details (DOP=8, Servers Requested=96, Servers Allocated=8)

Unfortunately, despite this implicit parallel run limitation, the application was still suffering from the same library cache symptoms (less severely than before, though), as shown below:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 10:57:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 11:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                  COUNT(1)
------------------------------------ ----------
library cache: mutex X                      518
                                            382
db file sequential read                     269
read by other session                        42
kksfbc child completion                      37
null event                                   31
log file parallel write                      18
cursor: pin S wait on X                      12
latch: shared pool                            7
cursor: pin S                                 7
log file sync                                 5
latch free                                    5
enq: RO - fast object reuse                   3
SQL*Net more data from client                 2
db file parallel write                        2
enq: CR - block range reuse ckpt              1
os thread startup                             1

SQL> select sql_id, session_id, in_parse, in_sql_execution
    from gv$active_session_history
    where sample_time between to_date('16072015 10:57:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 11:30:00', 'ddmmyyyy hh24:mi:ss')
    and event = 'library cache: mutex X'
    order by sql_id;

SQL_ID        SESSION_ID IN_PARSE IN_SQL_EXEC
------------- ---------- -------- -----------
a5u912v53t11t        516 Y        N
a5u912v53t11t        494 Y        N
a5u912v53t11t        343 Y        N
a5u912v53t11t        482 Y        N

Finally, we agreed with the client to disable parallelism (by setting the parallel_max_servers parameter to 1) so that the flashback processing would run serially:
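Pushed to its extreme, the same mechanism is again a one-line change. A sketch of the change, together with a way to verify that no PX slaves are being spawned (v$px_process lists the instance's parallel server processes):

```sql
-- Effectively serialize parallel operations
alter system set parallel_max_servers = 1 scope = both;

-- Verify: should return (almost) no rows while the flashback runs
select server_name, status, sid
from   v$px_process;
```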

SQL> show parameter parallel_max_servers

NAME                           TYPE        VALUE
------------------------------ ----------- -----
parallel_max_servers           integer     1

Once this was done, the test campaign finally started to perform very quickly, with the following picture from ASH:

SQL> select event, count(1)
    from gv$active_session_history
    where sample_time between to_date('16072015 14:15:00', 'ddmmyyyy hh24:mi:ss')
                      and     to_date('16072015 15:30:00', 'ddmmyyyy hh24:mi:ss')
    group by event
    order by 2 desc;

EVENT                                  COUNT(1)
------------------------------------- ----------
                                             966
db file sequential read                      375
db file scattered read                        49
log file parallel write                       46
log file sync                                 22
db file parallel write                        13
null event                                     8
local write wait                               7
SQL*Net more data from client                  5
os thread startup                              3
reliable message                               3
enq: PS - contention                           3
enq: RO - fast object reuse                    3
cursor: pin S wait on X                        1
direct path read                               1
Disk file operations I/O                       1
enq: CR - block range reuse ckpt               1
enq: TX - row lock contention                  1

The flashback processing completely ceased to run in parallel, and the test campaign once again performed quickly.

This is not an invitation to apply such a drastic and brutal workaround to reduce the effect of the many sessions woken up by a very high degree of parallelism, itself due to the default maximum DOP. Rather, it is a demonstration of:

  • how a high degree of parallelism can affect the locking in the library cache
  • how the parallel_max_servers parameter can bound the DOP of your query

2 Comments »

  1. As usual…perfect.
    Thanks for sharing.
    Yahya

    Comment by Yahya — August 5, 2015 @ 8:42 pm | Reply

  2. Yahya

    Thanks a lot for your encouraging words.

    Best regards
    Mohamed

    Comment by hourim — August 6, 2015 @ 4:24 pm | Reply

