Mohamed Houri’s Oracle Notes

September 18, 2015

Basic versus OLTP compression

Filed under: Oracle — hourim @ 11:53 am

If you are going to archive a very large amount of data from a running system into a historical table, and you expect that archived table to be read-only, then you had better use the free BASIC compression mode instead of the OLTP mode, which requires the separately licensed Advanced Compression option. This article aims to show the reason for this preference, based on a real-life example.

In order to take advantage of the free BASIC compression mode, you first need to insert data using a direct path load (this is why this compression mode used to be known as compression for direct path load, or bulk load, operations) and to ensure that there is no trigger or referential integrity constraint on the target table that would make Oracle silently ignore your requested direct path mode. In passing, in contrast to preceding releases, a 12.1.0.2 database will inform you when a direct path load is ignored, via the Note at the bottom of the corresponding insert execution plan:

Note
-----
   - PDML disabled because triggers are defined
   - Direct Load disabled because triggers are defined
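
Before relying on the APPEND hint, it is therefore worth checking the target table for such silent blockers. Here's a minimal sanity check against the standard dictionary views (owner and table names to be adapted; T1_HIST is the archive table created below):

-- any enabled trigger will silently downgrade the direct path load
SELECT trigger_name, status
FROM   dba_triggers
WHERE  table_owner = 'XXX'
AND    table_name  = 'T1_HIST';

-- enabled referential integrity constraints can have the same effect
SELECT constraint_name, constraint_type, status
FROM   dba_constraints
WHERE  owner           = 'XXX'
AND    table_name      = 'T1_HIST'
AND    constraint_type = 'R';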

Here’s an example of BASIC compression I executed on a 12.1.0.2 database:

SELECT
    t.owner,
    t.table_name,
    t.compress_for,
    s.bytes/1024/1024/1024 GB
FROM
    dba_tables t,
    dba_segments s
WHERE
    s.segment_name = t.table_name
AND s.owner        = t.owner
AND s.owner        = 'XXX'
AND t.table_name   = 'T1'
ORDER BY 4;
   
OWNER      TABLE_NAME   COMPRESS_FOR    GB
---------- ------------ ------------- ----------
XXX        T1                         263.114136 

263GB worth of data, of which we need to keep only the last two years online and archive the rest.

The first step in this process was to create an empty clone of the production table, defined with BASIC compression:

-- create a table compressed with BASIC mode
CREATE TABLE T1_HIST
      COMPRESS BASIC
      TABLESPACE HIST_TABSPACE 
         AS
      SELECT * FROM T1
      WHERE 1 = 2;   
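
Before loading anything into it, a quick look at the dictionary confirms the clone really carries the BASIC attribute (we expect COMPRESSION = 'ENABLED' and COMPRESS_FOR = 'BASIC'):

SQL> SELECT table_name, compression, compress_for
     FROM   dba_tables
     WHERE  table_name = 'T1_HIST';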

Since I was going to send a huge amount of data into this archive table, I decided to use a parallel insert, which requires parallel DML to be enabled first, either by altering the session or, from 12c onwards, by using the appropriate hint:

-- enable parallel dml 
SQL> alter session enable parallel dml;        
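
From 12c onwards, the same effect can be obtained at statement level, without touching the session, via the ENABLE_PARALLEL_DML hint. A sketch of the equivalent insert:

-- 12c statement-level alternative to altering the session
INSERT /*+ enable_parallel_dml append parallel(bsic,4) */
  INTO T1_HIST bsic
  SELECT /*+ parallel(oltp,4) */ *
  FROM  T1 oltp
  WHERE HIST_DAT > add_months(sysdate,-24);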

The way is now paved to start sending historical data into their new destination:

-- direct path load the rows to be archived into the history table
INSERT /*+ append parallel(bsic,4) */
  INTO T1_HIST bsic
  SELECT /*+ parallel(oltp,4) */ *
  FROM  T1 oltp
  WHERE HIST_DAT > add_months(sysdate,-24);

2,234,898,367 rows created.

More than 2 billion rows direct path loaded.
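
A simple way to convince yourself that the rows really went through the direct path: before committing, any attempt to read the freshly loaded table fails with ORA-12838, an error that only ever shows up after a direct path operation:

SQL> select count(1) from T1_HIST;
ORA-12838: cannot read/modify an object after modifying it in parallel

SQL> commit;

Commit complete.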

And the pleasant surprise is:

SQL> SELECT
         t.owner,
         t.table_name,
         t.compress_for,
         s.bytes/1024/1024/1024 GB
     FROM
         dba_tables t,
         dba_segments s
     WHERE
         t.compress_for   = 'BASIC'
     AND s.segment_name   = t.table_name
     AND s.owner          = t.owner
     AND s.owner          = 'XXX'
     ORDER BY 4;

OWNER      TABLE_NAME   COMPRESS_FOR   GB
---------- ------------ ------------- ----------
XXX         T1_HIST     BASIC         53.2504272 

We went from a source table with 263GB worth of data to a clone compressed in BASIC mode that ends up at only 53GB of historical data.

If you are wondering how many rows I did not send into the archive table, here is the count:

SQL> select /*+ parallel(8) */ count(1) 
     from T1
     where HIST_DAT <= add_months(sysdate,-24);
   
7,571,098 

This means that, using the free BASIC compression mode, I have archived almost all the rows of the production table and packed the initial 263GB into only 53GB. That is an excellent compression ratio of close to 5 (the initial size divided by almost 5), even though Oracle states that the compression ratio depends on the nature of your data and typically ranges between a factor of 2x and 4x.
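
The ratio quoted above is simple arithmetic on the two segment sizes:

SQL> select round(263.114136/53.2504272, 2) ratio from dual;

     RATIO
----------
      4.94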

Had you used the OLTP compression mode instead, you would have obtained an archive table approximately 10% bigger (~60GB). This is because Oracle considers a table compressed in BASIC mode to be read-only and not subject to any update, and therefore silently sets its PCTFREE to 0 behind the scenes:

SQL> select table_name, pct_free
    from dba_tables
    where table_name = 'T1_HIST';

TABLE_NAME    PCT_FREE
----------- ----------
T1_HIST          0 

As you can see, if you intend to archive a huge amount of data into a read-only table, want to reclaim disk space in the process, and don't want to pay for the Advanced Compression option required by OLTP compression, then you can opt for the free BASIC compression mode.

A few interesting things came up along with this archiving process, particularly the new 12c mechanism (HYBRID TSM/HWMB) Oracle uses to keep down the number of new extents and to avoid the HV enqueue wait event during a parallel direct path load with a high DOP across several instances:

SQL Plan Monitoring Details (Plan Hash Value=3423860299)
====================================================================================
| Id |              Operation               |   Name   |  Rows   |Execs |   Rows   |
|    |                                      |          | (Estim) |      | (Actual) |
====================================================================================
|  0 | INSERT STATEMENT                     |          |         |    5 |        8 |
|  1 |   PX COORDINATOR                     |          |         |    5 |        8 |
|  2 |    PX SEND QC (RANDOM)               | :TQ10000 |    819M |    4 |        8 |
|  3 |     LOAD AS SELECT (HYBRID TSM/HWMB) |          |         |    4 |        8 |
|  4 |      OPTIMIZER STATISTICS GATHERING  |          |    819M |    4 |       2G |
|  5 |       PX BLOCK ITERATOR              |          |    819M |    4 |       2G |
|  6 |        TABLE ACCESS FULL             | T1       |    819M |  442 |       2G |
====================================================================================

Predicate Information (identified by operation id):
---------------------------------------------------
 
   6 - access(:Z>=:Z AND :Z<=:Z) filter("HIST_DAT">ADD_MONTHS(SYSDATE@!,-24))

Note
-----
   - Degree of Parallelism is 4 because of table property

But this will be detailed in a separate blog article.


6 Comments »

  1. Mohamed,
    That's a good introduction to basic compression. However, it would be worth describing another limitation of basic compression compared to OLTP: what happens if the model evolves and, say, you need to change the type of a column or add a column to the table?

    Comment by loiclefevre — September 19, 2015 @ 8:23 am | Reply

    • Loic,

      I took care to emphasize that BASIC compression is for read-only tables and for direct path load inserts only. Those are two important limitations of BASIC compared to OLTP mode, aren't they?

      Your suggestion of a table evolution that needs to be reflected in the BASIC compressed archive table during the next “direct path load” is an excellent thing to test, to come up with observations that might help me and others when choosing between the OLTP and BASIC compression modes.
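
      For instance, the classic surprise to test for goes along these lines (a sketch; the column name is hypothetical):

      -- adding a nullable column to a BASIC compressed table works
      ALTER TABLE T1_HIST ADD (NEW_FLAG VARCHAR2(1));

      -- dropping a column is where BASIC compression bites
      ALTER TABLE T1_HIST DROP COLUMN NEW_FLAG;
      -- ORA-39726: unsupported add/drop column operation on compressed tables

      -- the usual workaround is to mark the column unused instead
      ALTER TABLE T1_HIST SET UNUSED COLUMN NEW_FLAG;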

      Best regards
      Mohamed

      Comment by hourim — September 19, 2015 @ 2:02 pm | Reply

  2. Mohamed,
    as far as I can say, the limitations of OLTP compression are quite … limiting. With updates, OLTP compression does not work at all, as Jonathan Lewis showed in http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/. There are also some interesting links in the comments (for example by Randolf Geist, who has also written some articles on the limitations).

    Comment by Martin Preiss — September 19, 2015 @ 12:02 pm | Reply

    • Martin,

      I have read Jonathan Lewis's excellent series of articles on compression and Randolf Geist's related comments about the limitations that exist for the BASIC and OLTP compression modes. But it would be worth having a practical case of updates against an OLTP compressed table, preferably in the new 12c release, to check whether those limitations are still present and to measure the impact updates can have on the existing compressed data.
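
      Such a test could be as simple as comparing the segment size before and after a mass update (a sketch, with a hypothetical T_OLTP copy of the table):

      -- build a small OLTP compressed copy to play with
      CREATE TABLE T_OLTP COMPRESS FOR OLTP
      AS SELECT * FROM T1 WHERE ROWNUM <= 1e6;

      SELECT bytes/1024/1024 MB FROM user_segments
      WHERE  segment_name = 'T_OLTP';

      -- mass update, then re-measure the segment
      UPDATE T_OLTP SET HIST_DAT = HIST_DAT + 1;
      COMMIT;

      SELECT bytes/1024/1024 MB FROM user_segments
      WHERE  segment_name = 'T_OLTP';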

      Best regards
      Mohamed

      Comment by hourim — September 19, 2015 @ 2:12 pm | Reply

  3. Hello, Mohamed.
    I just stumbled across your article because of a problem I'm facing: in our environment, some tables compressed with BASIC need to be modified. The DBA who did the compression was not aware of such needs; maybe at the time nobody thought those tables would be changed.

    I see I can easily change the compression mode from BASIC to OLTP with a single command, but I'm not sure of the implications of such a process. Do you have any advice? Are there major impacts from this operation?

    Comment by Leandro — April 26, 2016 @ 1:59 pm | Reply

  4. Leandro,

    Moving a table from BASIC to OLTP compression invalidates its indexes, so you will need to rebuild the ‘unusable’ indexes as well. And when you do such a move on a big table, you will need to measure the time it takes before running the move in PRODUCTION. Don't forget, either, to track partitions that have already been compressed in OLTP mode. If you intend to rebuild the indexes in parallel, don't forget to reset the parallel degree of the indexes after the rebuild. BASIC compressed tables have their PCTFREE set to 0, which means you are not expected to update them. And if you intend to add data to the BASIC compressed table, you need to direct path load it in order to benefit from BASIC compression.
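
    A minimal sketch of that sequence (the index name is hypothetical):

    -- moving the table changes its compression mode and marks its indexes UNUSABLE
    ALTER TABLE T1_HIST MOVE COMPRESS FOR OLTP;

    -- list the indexes invalidated by the move
    SELECT index_name FROM dba_indexes
    WHERE  table_name = 'T1_HIST'
    AND    status     = 'UNUSABLE';

    -- rebuild them, possibly in parallel, then reset the degree
    ALTER INDEX T1_HIST_IDX REBUILD PARALLEL 8;
    ALTER INDEX T1_HIST_IDX NOPARALLEL;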

    And since you are definitely going to move your table (and rebuild its indexes), I would advise you to check dependent queries for any change of execution plan.

    Best regards

    Comment by hourim — April 26, 2016 @ 2:11 pm | Reply

