There are many situations in the Oracle database world that I would probably never have encountered without the variety of real-life applications and the problems they need solved. A couple of days ago, a critical query in one such application suffered severe performance deterioration. When I started looking at this query, I first decided to print out the statistics of its historical executions, shown below:
SQL> @HistStats
Enter value for sql_id: gqj0qgtduh1k7
Enter value for from_date: 01122017

SNAP_BEGIN                PLAN_HASH_VALUE      EXECS  AVG_ETIME    AVG_PIO    AVG_LIO   AVG_ROWS
------------------------- --------------- ---------- ---------- ---------- ---------- ----------
01/12/17 12:00:44.897 AM        179157465        178          4        357     244223        543
01/12/17 01:00:50.192 AM        179157465         40          6       3976     246934        400
01/12/17 04:00:02.121 AM        179157465          2          2        171     246203        400
../..
03/12/17 05:00:22.064 AM        179157465         10         18       4231     245802        400
../..
04/12/17 09:00:19.716 AM        179157465         13          1          4     253433        482
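The HistStats script itself is not reproduced in this post. For the curious, a minimal sketch of such a script, assuming the AWR views dba_hist_sqlstat and dba_hist_snapshot (which require the Diagnostics Pack licence) and hypothetical substitution variable names, could look like this:

```sql
-- Hypothetical sketch of a HistStats-like script: per-snapshot averages
-- for one sql_id, derived from the AWR delta columns.
select sn.begin_interval_time                               snap_begin
      ,st.plan_hash_value
      ,sum(st.executions_delta)                             execs
      ,round(sum(st.elapsed_time_delta)
             / greatest(sum(st.executions_delta), 1) / 1e6) avg_etime
      ,round(sum(st.disk_reads_delta)
             / greatest(sum(st.executions_delta), 1))       avg_pio
      ,round(sum(st.buffer_gets_delta)
             / greatest(sum(st.executions_delta), 1))       avg_lio
      ,round(sum(st.rows_processed_delta)
             / greatest(sum(st.executions_delta), 1))       avg_rows
from   dba_hist_sqlstat st
       join dba_hist_snapshot sn
         on  sn.snap_id         = st.snap_id
         and sn.instance_number = st.instance_number
where  st.sql_id              = '&sql_id'
and    sn.begin_interval_time >= to_date('&from_date', 'ddmmyyyy')
group  by sn.begin_interval_time, st.plan_hash_value
order  by sn.begin_interval_time;
```

The greatest(…, 1) guards against snapshots where the delta of executions is zero.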
The most important characteristic of this query is that it consumes 250K logical I/O per execution and that it can sometimes last up to 18 seconds. It is a query generated by Hibernate. Unfortunately, because of Hibernate's habit of changing table aliases, this query got a new sql_id when it was deployed to production in the December release. This is why we lost track of the query's pre-release statistics, which would have allowed us to retrieve the previous, acceptable execution plan.
Starting from the end
Exceptionally, let's start this article by showing the statistics of this query after I fixed it:
SQL> @HistStats
Enter value for sql_id: gqj0qgtduh1k7
Enter value for from_date: 01122017

SNAP_BEGIN                PLAN_HASH_VALUE      EXECS  AVG_ETIME    AVG_PIO    AVG_LIO   AVG_ROWS
------------------------- --------------- ---------- ---------- ---------- ---------- ----------
01/12/17 12:00:44.897 AM        179157465        178          4        357     244223        543
01/12/17 01:00:50.192 AM        179157465         40          6       3976     246934        400
01/12/17 04:00:02.121 AM        179157465          2          2        171     246203        400
../..
03/12/17 05:00:22.064 AM        179157465         10         18       4231     245802        400
../..
04/12/17 09:00:19.716 AM        179157465         13          1          4     253433        482
../..
11/12/17 02:00:04.453 PM       1584349102          4          0         14      17825        482 -- fixed
12/12/17 12:00:38.622 AM       1584349102         67          1        661      13840        522
12/12/17 01:00:42.740 AM       1584349102         40          0        103       7605        852
13/12/17 05:00:27.270 AM       1584349102         10          1        845      16832        400
Notice how, thanks to the fix I will explain in the next section, logical I/O consumption dropped remarkably, from 244K to 17K per execution on average.
How was this achieved?
The Real-Time SQL Monitoring report of the degraded query confirms the bad response time of 12 seconds:
Global Information
------------------------------
 Status             : DONE (ALL ROWS)
 Instance ID        : 2
 Session            : xxxx (324:25737)
 SQL ID             : gqj0qgtduh1k7
 SQL Execution ID   : 16777216
 Execution Started  : 12/05/2017 06:01:20
 First Refresh Time : 12/05/2017 06:01:29
 Last Refresh Time  : 12/05/2017 06:01:32
 Duration           : 12s
 Module/Action      : JDBC Thin Client/-
 Service            : xxxxx
 Program            : JDBC Thin Client/
 Fetch Calls        : 10
This report reveals an oddity in the following excerpt of its 54-operation execution plan:
---------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Starts | E-Rows | A-Rows |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |      1 |        |    398 |
|* 39 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1           |      1 |     26 |    398 |
|* 40 |   INDEX RANGE SCAN                  | IDX_PRD_TYPE |      1 |     2M |     3M |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  39 - filter(("PRODUCT_ID"=:1 OR "PRODUCT_ID"=:2 OR "PRODUCT_ID"=:3
               OR "PRODUCT_ID"=:4 ../.. OR "PRODUCT_ID"=:398))
  40 - access("PRODUCT_TYPE"='M')
This oddity resides in the fact that we went from 3M rowids supplied by the index at operation 40 down to only 398 rows at the table level. Such a situation can have one of the following explanations:
- The CBO has made a very bad choice preferring the IDX_PRD_TYPE index over another existing index with a much better index effective selectivity.
- IDX_PRD_TYPE is the sole index with which Oracle can cover the PRODUCT_ID and PRODUCT_TYPE predicates. In this case, it represents an imprecise index that should be redesigned if we want to efficiently cover the above query.
The answer is straightforward if you remember that a column with an _ID suffix or prefix in its name generally refers to a primary key. The T1 table does indeed have a primary key index, defined as follows:
T1_PK (PRODUCT_ID)
In addition, the 398 rows returned at the table level are nothing other than the 398 elements constituting the in-list of the corresponding query, reproduced here below:
SELECT *
FROM t1
WHERE PRODUCT_ID IN (:1,:2,:3,:4,:5,....,:398)
AND PRODUCT_TYPE = 'M';
While the badly chosen index has the following definition:
IDX_PRD_TYPE(PRODUCT_TYPE,COLX)
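To make the discussion easier to follow, here is a minimal model of the objects involved. This is a sketch of mine, not the real application schema: every column name other than PRODUCT_ID and PRODUCT_TYPE, and the PADDING filler, are assumptions.

```sql
-- Hypothetical model of the T1 table and its two indexes
create table t1
( product_id   number        not null   -- primary key, referenced by the in-list
 ,product_type varchar2(1)   not null   -- 13 distinct, unevenly distributed values
 ,colx         number                   -- assumed second column of IDX_PRD_TYPE
 ,padding      varchar2(100)            -- filler, assumption
 ,constraint t1_pk primary key (product_id)  -- backed by the T1_PK index
);

create index idx_prd_type on t1 (product_type, colx);
```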
And the initial question turned into: why has the primary key index not been used in this critical part of the execution plan?
At this stage of the investigation it became crystal clear to me that using the primary key index is, to a very large extent, better than using the index on the PRODUCT_TYPE column. This is simply because, if used by the CBO, the primary key index will supply its parent table operation with at most one rowid per element of the in-list predicate. That is, for an in-list of 398 elements, Oracle will apply the PRODUCT_TYPE = 'M' filter to only 398 rows at the table level. That is far better than filtering 3M rows at the table level using the 3M index rowids supplied by the IDX_PRD_TYPE(PRODUCT_TYPE,COLX) index.
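As a side note, had a quick workaround been needed before the root cause was understood, an INDEX hint naming the primary key index would have forced the desired access path. This is a sketch only: the real query is generated by Hibernate and cannot easily be hinted.

```sql
-- Hypothetical hinted variant forcing the primary key index
SELECT /*+ index(t1 t1_pk) */ *
FROM t1
WHERE PRODUCT_ID IN (:1,:2,:3 /* ..., :398 */)
AND PRODUCT_TYPE = 'M';
```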
Bear in mind as well that all parameters participating in index desirability are at their default values, and that the PRODUCT_TYPE column has a frequency histogram that perfectly captures the uneven distribution of its 13 distinct values.
select column_name
      ,histogram
from   user_tab_col_statistics
where  table_name  = 'T1'
and    column_name = 'PRODUCT_TYPE';

COLUMN_NAME  HISTOGRAM
------------ ---------------
PRODUCT_TYPE FREQUENCY
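For those who want to go one step further and see the per-value distribution behind that frequency histogram, the cumulative ENDPOINT_NUMBER column of user_tab_histograms can be decoded as follows. This is a generic sketch, not taken from the original investigation:

```sql
-- ENDPOINT_NUMBER is cumulative in a frequency histogram, so the
-- difference between consecutive endpoints gives the per-value row count.
select endpoint_actual_value
      ,endpoint_number
      ,endpoint_number
         - lag(endpoint_number, 1, 0)
             over (order by endpoint_number) as rows_in_bucket
from   user_tab_histograms
where  table_name  = 'T1'
and    column_name = 'PRODUCT_TYPE'
order  by endpoint_number;
```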
The explain plan command, issued once with the primary key index forced and once with the default plan, shows that the combined table/index access cost is cheaper when the IDX_PRD_TYPE index is used, as the following proves:
-------------------------------------------------------------------
| Id | Operation                    | Name  | Rows  | Cost (%CPU)|
-------------------------------------------------------------------
|  0 | SELECT STATEMENT             |       |    27 |    519  (1)|
|  1 |  INLIST ITERATOR             |       |       |            |
|* 2 |   TABLE ACCESS BY INDEX ROWID| T1    |    27 |    519  (1)|
|* 3 |    INDEX UNIQUE SCAN         | T1_PK |   398 |    399  (0)|
-------------------------------------------------------------------

--------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows  |Cost (%CPU)|
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |              |    27 |    58  (0)|
|* 1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1           |    27 |    58  (0)|
|* 2 |   INDEX RANGE SCAN                  | IDX_PRD_TYPE | 2183K |     3  (0)|
--------------------------------------------------------------------------------
And the initial question turned into: why is the cost of the inappropriate index cheaper?
After a few minutes of reflection, I decided to get the statistics of these two indexes:
select index_name
      ,distinct_keys
      ,leaf_blocks
from   user_indexes
where  table_name = 'T1';

INDEX_NAME       DISTINCT_KEYS LEAF_BLOCKS
---------------- ------------- -----------
T1_PK                 32633730      111089
IDX_PRD_TYPE
And finally the initial question turned into: why does the IDX_PRD_TYPE index have no statistics?
The answer is:
- The statistics of the T1 table were voluntarily locked several months ago
- The IDX_PRD_TYPE index was created and put into production very recently
The combination of these two points resulted in a newly created index without any statistics: although Oracle normally gathers index statistics automatically at index creation time, a lock on the table statistics suppresses this. The absence of statistics made this index appear very cheap in the eyes of the CBO, and consequently led to a very bad index choice and a noticeable performance issue.
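Both points can be verified directly from the data dictionary: STATTYPE_LOCKED in user_tab_statistics exposes the lock on the table statistics, while a NULL LAST_ANALYZED betrays the statistics-less index.

```sql
-- Is the table's statistics lock in place? (non-NULL means locked)
select object_type, stattype_locked
from   user_tab_statistics
where  table_name = 'T1';

-- Which indexes on T1 have never been analyzed?
select index_name, last_analyzed
from   user_indexes
where  table_name = 'T1';
```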
Obviously, gathering this index's statistics makes it less desirable in favour of the primary key index, as the following proves:
exec dbms_stats.unlock_table_stats(user, 't1');

begin
  dbms_stats.gather_index_stats(user, 'IDX_PRD_TYPE', no_invalidate => false);
end;
/

exec dbms_stats.lock_table_stats(user, 't1');

-------------------------------------------------------------------
| Id | Operation                    | Name  | Rows  | Cost (%CPU)|
-------------------------------------------------------------------
|  0 | SELECT STATEMENT             |       |    27 |    519  (1)|
|  1 |  INLIST ITERATOR             |       |       |            |
|* 2 |   TABLE ACCESS BY INDEX ROWID| T1    |    27 |    519  (1)|
|* 3 |    INDEX UNIQUE SCAN         | T1_PK |   398 |    399  (0)|
-------------------------------------------------------------------

--------------------------------------------------------------------------------
| Id | Operation                           | Name         | Rows  |Cost (%CPU)|
--------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                    |              |    27 |   137K (2)|
|* 1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1           |    27 |   137K (2)|
|* 2 |   INDEX RANGE SCAN                  | IDX_PRD_TYPE | 2183K |  5678 (1)|
--------------------------------------------------------------------------------
Bottom Line
All things being equal, when you decide to lock the statistics of one of your tables, bear in mind that any index subsequently created on this table will not have any statistics. Such an index will then, naturally, look much more desirable to the CBO than any index created before the table statistics were locked, quite likely resulting in wrong index choices and bad execution plans. If you are in a similar situation, you had better gather statistics on your new indexes manually for as long as your table statistics are kept locked.
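As a practical consequence, it is worth periodically checking for indexes that were left without statistics on tables whose statistics are locked. A sketch of such a check could be:

```sql
-- Indexes never analyzed, sitting on tables with locked statistics
select i.table_name
      ,i.index_name
from   user_indexes i
       join user_tab_statistics t
         on t.table_name = i.table_name
where  t.object_type     = 'TABLE'
and    t.stattype_locked is not null
and    i.last_analyzed   is null;
```

Any index this query returns is a candidate for a manual dbms_stats.gather_index_stats call after a temporary unlock, exactly as shown in the fix above.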