Sunday, April 24, 2011

Delayed Block Cleanout / ORA-01555

Here is a link to a collection of scripts that can be used for some entertainment (well, it probably depends on your personal definition of "entertainment"...) regarding "Delayed Block Cleanout". These scripts are meant to be used in a playground environment - do not run them anywhere near anything important, as they might have undesirable side effects. Please read the comments in the file header before attempting to run them.

The scripts allow you to gain some insight into the different variants of delayed block cleanout and show how to force an ORA-01555 error caused by it. Among other things, they demonstrate that blocks generated via direct-path inserts are not "clean" but subject to a special kind of delayed block cleanout, and therefore - although unlikely - can still cause an ORA-01555 error due to delayed block cleanout. This has been discussed recently here.
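
To make that last point tangible, here is a minimal sketch along the lines of those scripts (playground only; T_DEMO is a made-up table name): a CTAS is a direct-path operation, so the resulting blocks lack the "commit SCN" in their ITL slots, and even a plain query from a fresh session generates a small amount of redo for the cleanout:

create table t_demo as
select rownum as id, rpad('x', 100, 'x') as padding
from dual
connect by level <= 100000;

-- in a NEW session, capture the redo generated so far...
select sn.name, st.value
from v$mystat st, v$statname sn
where sn.statistic# = st.statistic#
and sn.name = 'redo size';

-- ...run a read-only statement against the freshly loaded table...
select count(*) from t_demo;

-- ...and re-check: a small increase in "redo size" corresponds to the
-- single block cleaned out to obtain the cached "commit SCN"
select sn.name, st.value
from v$mystat st, v$statname sn
where sn.statistic# = st.statistic#
and sn.name = 'redo size';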

It is amazing how much effort Oracle has put into minimizing the overhead of storing the lock information in the block while still offering fast commits:

- There are fast cleanouts ("commit cleanouts") for small transactions that leave blocks in a "clean enough" state (which means that only the commit SCN of the ITL slot is updated, but everything else is left behind for others to clean up) and do not generate redo although they modify the block. The block will be dirty and written (possibly again) by DBWR.

- There are "immediate" (as part of consistent / current mode gets) and "deferred" (piggy-back as part of subsequent block modifications) delayed block cleanouts for blocks left behind by larger transactions that will tidy up the block including any lock bytes and ITL slot information. These will generate redo and dirty blocks.

- There is a special treatment of blocks generated by direct-path inserts: they do not need to be cleaned up in the same way as blocks modified by conventional DML, but they still miss the "commit SCN" information in their ITL slots. Only a single block will be cleaned out (a small amount of redo, and the block is dirtied), and the "commit SCN" obtained will be "cached" in the session for further blocks accessed by this session.
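
One way to watch these variants in action is to sample the cleanout-related session statistics around a statement of interest - a minimal sketch (statistic names as found in v$statname; which ones move depends on the cleanout variant triggered):

select sn.name, st.value
from v$mystat st, v$statname sn
where sn.statistic# = st.statistic#
and sn.name in (
  'commit cleanouts',
  'commit cleanouts successfully completed',
  'immediate (CR) block cleanout applications',
  'immediate (CURRENT) block cleanout applications',
  'deferred (CURRENT) block cleanout applications',
  'cleanouts only - consistent read gets',
  'cleanouts and rollbacks - consistent read gets'
)
order by sn.name;

-- run the DML / query to analyse, then re-run the query above and
-- compare the values to see which cleanout mechanism kicked in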

For further reading on the topic, I recommend these links as starting points:

- "Clean it up" by Jonathan Lewis
- "Impact of Direct Reads on Delayed Block Cleanouts" by Christian Antognini

Some final food for thought regarding delayed block cleanout:

- Direct path reads performed by parallel execution slaves
- Adaptive serial direct reads introduced in Oracle 11g
- Read-only tablespaces
- Index blocks modified by DML

5 comments:

  1. Hi Randolf,

    While doing some tests with HCC (in 19c), I came across a scenario that may involve block cleanouts (that's how I ended up on this page); it would be great if you could share a thought on this:

    0. bigtab has 20M rows (12 randomized columns)
    1. BEGIN EXECUTE IMMEDIATE 'drop table t_hcc_archive_high'; EXCEPTION WHEN OTHERS THEN NULL; END;
    /
    BEGIN EXECUTE IMMEDIATE 'drop table t_hcc_archive_high2'; EXCEPTION WHEN OTHERS THEN NULL; END;
    /
    2. CREATE TABLE t_hcc_archive_high COMPRESS FOR ARCHIVE HIGH AS SELECT * FROM bigtab WHERE 1 = 0;
    CREATE TABLE t_hcc_archive_high2 COMPRESS FOR ARCHIVE HIGH AS SELECT * FROM bigtab WHERE 1 = 0;
    3. INSERT INTO t_hcc_archive_high select * from bigtab; commit;
    --76 minutes to insert 20M rows in empty t_hcc_archive_high
    4. INSERT /*+ append */ INTO t_hcc_archive_high2 select * from bigtab; commit;
    --20 minutes to insert 20M rows in empty t_hcc_archive_high2
    5. INSERT INTO t_hcc_archive_high select * from bigtab; commit;
    --76 minutes to insert 20M rows on top of already existing 20M rows
    6. INSERT INTO t_hcc_archive_high2 select * from bigtab; commit;
    --16 minutes to insert 20M rows on top of already existing 20M rows

    I couldn't find any explanation for why the second conventional insert (step 6) takes just 16 minutes; the test was reproduced multiple times and the results are consistent.

    Any hints ?

    Regards

    Replies
    1. Hi,

      Many possibilities come to mind, so I would try to start with the obvious:

      - Capture session statistics for those runs - that should give you a first hint about what is different. If you run this in a test environment where you have exclusive access, system / instance level statistics should also be fine, so you could simply generate an AWR snapshot before and after each run (if you happen to have a Diagnostics Pack license); otherwise use Statspack, or the freely available Snapper / MyStats / RunStats tools on session level (more reliable). See the sketch after this list.

      - It would be interesting to see whether you achieve different compression ratios with these different attempts. I could imagine, for example, that the quicker runs don't compress the same way the others do - in the past HCC compression was only supported with direct-path operations; this has changed with 12.2.0.1, where HCC compression is also supported with conventional loads, but I don't have that much experience with that improvement, so I am not sure how it works in detail.

      - If you make use of Auto DOP you also need to consider that some runs might make use of parallelism and others might not, even if you don't specify any parallelism explicitly.

      So, as usual, more details will help to explain the different behaviour.
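
      A minimal sketch of the first two checks (hedged: the segment name is taken from your test above, and the exact DBMS_COMPRESSION constants differ between releases):

      -- AWR snapshot before / after each run (Diagnostics Pack required)
      exec dbms_workload_repository.create_snapshot

      -- check which compression type a sampled row actually ended up with
      -- (1 = no compression; higher values map to the HCC levels, see the
      -- DBMS_COMPRESSION constants of your release for the exact mapping)
      select dbms_compression.get_compression_type(user, 'T_HCC_ARCHIVE_HIGH', rid) as comp_type
      from (select rowid as rid from t_hcc_archive_high where rownum = 1);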

      Regards,
      Randolf

      Delete
    2. Of course I have all the execution details (AWR, ASH, SQL Monitor, snapper stats); I am trying to get 10046 traces for all the tests as well.

      What I found strange in the SQL Monitoring report (IO section) when comparing the executions from 5) and 6):

      5) Buffer gets: 3.6M
      IO requests: 800K
      I/O bytes: 7.9GB
      Average I/O size: 10.3 KB

      select count(1), session_state, event, CURRENT_OBJ#, CURRENT_FILE#
      from v$active_session_history
      where sql_id = '8p0x9uc3udpyj'
      and sql_exec_id = 16777223
      group by session_state, event, CURRENT_OBJ#, CURRENT_FILE#
      order by 1 desc;

        COUNT(1) SESSION EVENT                             CURRENT_OBJ# CURRENT_FILE#
      ---------- ------- --------------------------------- ------------ -------------
            2987 ON CPU                                         3320288           288
             714 ON CPU                                               0           261
              41 WAITING cell single block physical read        3320288           288
              41 WAITING cell multiblock physical read          3320288           288
              41 WAITING cell list of blocks physical read      3320288           288
              32 ON CPU                                         3320495           288
              13 WAITING cell list of blocks physical read            0           261
               3 ON CPU                                             312           259
               2 WAITING cell list of blocks physical read      3320495           288

      6) Buffer gets: 3.4M
      IO requests: 109K
      I/O bytes: 7.4GB
      Average I/O size: 70.9 KB

      select count(1), session_state, event, CURRENT_OBJ#, CURRENT_FILE#
      from v$active_session_history
      where sql_id = 'cauajh6txt23u'
      and sql_exec_id = 16777219
      group by session_state, event, CURRENT_OBJ#, CURRENT_FILE#
      order by 1 desc;

        COUNT(1) SESSION EVENT                             CURRENT_OBJ# CURRENT_FILE#
      ---------- ------- --------------------------------- ------------ -------------
             894 ON CPU                                         3320288           288
              21 WAITING cell multiblock physical read          3320288           288
              15 ON CPU                                         3320496           288
              11 WAITING cell single block physical read        3320288           288
               9 WAITING cell single block physical read        3320496           288
               2 ON CPU                                               0           261

      What could be the reason for the average I/O size being higher in the second case (on the table that was populated first with a direct-path insert)?

      Indeed, the table populated with the direct-path insert is smaller:

      SEGMENT_NAME                SIZE_MB     BLOCKS
      ------------------------ ---------- ----------
      BIGTAB                    8692.8125    1112680
      T_HCC_ARCHIVE_HIGH             5120     655360
      T_HCC_ARCHIVE_HIGH2            4864     622592

      Tests were performed on a dev machine having no other workloads.
      Auto DOP is not used.

    3. Hi,

      As already outlined, I would start with capturing the session statistics before / after the run (Snapper can do this in "developer mode"; the MYSTATS / RUNSTATS packages do it too and diff the statistics automatically) - default Snapper isn't that useful here since it just captures session statistics for a limited period of time without diffing them. A minimal manual version follows below.
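
      For illustration, a manual before / after diff without any tooling (hedged sketch; STATS_BEFORE is a made-up scratch table name, to be run in the session performing the load):

      -- capture the session statistics before the run
      create table stats_before as
      select sn.name, st.value
      from v$mystat st, v$statname sn
      where sn.statistic# = st.statistic#;

      -- ...run the INSERT under test in this same session, then
      -- report only the statistics that changed, largest delta first
      select sn.name, st.value - b.value as delta
      from v$mystat st, v$statname sn, stats_before b
      where sn.statistic# = st.statistic#
      and b.name = sn.name
      and st.value - b.value > 0
      order by delta desc;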

      Regarding your SQL Monitor findings: obviously the other run performs 700K more I/O requests, and these are very likely single block reads, which explains why you get a different average I/O size. You could try to find out which blocks these I/O requests are reading (CURRENT_FILE# / CURRENT_BLOCK#); possibly these are ASSM space management blocks, and some part of the different behaviour might be due to ASSM space management issues. If it were block cleanout related you should see more waits / I/Os on the Undo tablespace, but to be sure about this you should enable logical I/O tracing - or check the session statistics as already mentioned. A sketch for mapping those blocks to segments follows below.
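
      For example, mapping the sampled blocks back to their owning segments via DBA_EXTENTS (hedged sketch using the SQL_ID from your output; note this only identifies the segment - telling ASSM bitmap blocks apart from data blocks would require a block dump, and the join can be slow on large databases):

      select e.owner, e.segment_name, e.segment_type, count(*) as samples
      from v$active_session_history ash, dba_extents e
      where ash.sql_id = '8p0x9uc3udpyj'
      and e.file_id = ash.current_file#
      and ash.current_block# between e.block_id and e.block_id + e.blocks - 1
      group by e.owner, e.segment_name, e.segment_type
      order by samples desc;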

      Regards,
      Randolf

  2. You can find all the execution stats here (along with tkprof-formatted 10046 output): https://community.oracle.com/mosc/discussion/comment/16858905#Comment_16858905

