Sunday, May 22, 2011

ASSM bug reprise - part 2

Introduction

In the first part of this post I explained some of the details and underlying reasons of bug 6918210. The most important point about the bug is that it can only be hit if many row migrations happen during a single transaction. Since excessive row migrations are usually a sign of poor design, this point probably can't be stressed enough:

If you don't have excessive row migrations, the bug cannot become significant

Of course, there might be cases where you believe you have a sound design, but due to a lack of information about the internal workings it might not be obvious that certain activities can cause excessive row migrations.

One popular feature that might cause such trouble is compression. The most important thing that you need to know about compression is this:

Compression and subsequent significant updates do not work very well together

The main reason for this is that after an update Oracle stores the affected row data uncompressed (the block still uses the "compressed" row/block format, but the updated row data itself is stored uncompressed). Note that this also holds true for the "advanced" OLTP compression option, as we will see shortly.

Given this fact it might become obvious that updates to compressed data can cause excessive row migrations, because:

- The row data stored in uncompressed format will usually require a lot more space than the original compressed format, so the row will no longer fit into the place in the block where it originally resided. It needs to be moved somewhere else within the same block (and the block might have to be "re-organized" in order to create sufficient contiguous space, which adds CPU overhead to the operation). If there isn't sufficient space available in the block, the row will have to be migrated to a different block with sufficient space

- By default the "basic" compression option implicitly sets the PCTFREE of the table to 0 (you can change this by explicitly specifying a PCTFREE), whereas "OLTP" compression leaves PCTFREE at the default of 10

These PCTFREE settings, in particular the default of 0 used by "basic" compression, do not leave a lot of free space in the block for row growth. So it becomes clear that, without a non-default PCTFREE setting, anything that updates more than just a couple of rows per block of compressed data will lead to row migrations.
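If you want to verify these defaults in your own environment, a quick look at the dictionary shows what PCTFREE a compressed table ends up with. A minimal sketch - the table names are just examples, and the COMPRESS_FOR column is available from 11g onwards:


create table t_basic (id number) compress;
create table t_oltp  (id number) compress for all operations;

-- PCT_FREE should show 0 for basic compression and 10 for OLTP compression
select table_name, pct_free, compression, compress_for
from   user_tables
where  table_name in ('T_BASIC', 'T_OLTP');
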

Of course you could argue that, at least with "basic" compression, which does not attempt to re-compress the blocks, any application updating more than a couple of compressed rows must be called badly designed, since it uses the feature in a way it was not intended for.

Things look different, from a general point of view at least, with "OLTP" compression, since it promises to re-compress the data during conventional DML as well and therefore should allow subsequent updates without suffering from too many or even excessive row migrations. Unfortunately, I couldn't confirm this in the tests I've performed.

Basic Compression

So let's start with a simple variation of the original test case: introduce "basic" compression and have a look at the results.

By the way, all the tests have been performed using an 8K ASSM tablespace.


set echo on timing on
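
-- Note: this script assumes a CHAINED_ROWS table created beforehand (for
-- example via $ORACLE_HOME/rdbms/admin/utlchain.sql) and a session statistics
-- snapshot utility exposed as mystats_pkg.ms_start / mystats_pkg.ms_stop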

drop table t1;

purge table t1;

CREATE TABLE t1
(pkey varchar2(1),
v1 varchar2(255),
v2 varchar2(255)
)
compress
pctfree 0
TABLESPACE &&tblspace;

INSERT /*+ append */ INTO t1
SELECT '1' as pkey,
to_char((mod(rownum, 1) + 1), 'TM') || 'BL' AS v1,
'BLUBB' /*null*/ AS v2
FROM dual CONNECT BY LEVEL <= 50000
/

commit;

BEGIN dbms_stats.gather_table_stats(
ownname => null,
tabname => 'T1');
END;
/

create index t1_idx on t1 (substr(v1, 1, 1)) TABLESPACE &tblspace;

SELECT num_rows,blocks FROM user_tables WHERE table_name = 'T1';

truncate table chained_rows;

analyze table t1 list chained rows;

select count(*) from chained_rows;

column file_no new_value file_no
column block_no new_value block_no

select
dbms_rowid.rowid_relative_fno(rowid) as file_no
, dbms_rowid.rowid_block_number(rowid) as block_no
from
t1
where
rownum <= 1;

alter session set tracefile_identifier = 'before_update';

alter system checkpoint;

alter system dump datafile &file_no block &block_no;

pause Press Enter to continue...

exec mystats_pkg.ms_start

-- Counters not updated in 11g
-- execute snap_kcbsw.start_snap

/*
alter session set tracefile_identifier = 'space_layer';

alter session set events '10320 trace name context forever, level 3';
alter session set events '10612 trace name context forever, level 1';
*/

UPDATE /*+ full(t1) */ t1 SET v2 = v1
where substr(v1, 1, 1) = '1';

/*
alter session set events '10320 trace name context off';
alter session set events '10612 trace name context off';
*/

commit;

set serveroutput on size 1000000 format wrapped
set linesize 120
set trimspool on

-- Counters not updated in 11g
-- execute snap_kcbsw.end_snap

exec mystats_pkg.ms_stop(1)

BEGIN dbms_stats.gather_table_stats(
ownname => null,
tabname => 'T1');
END;
/

SELECT num_rows,blocks FROM user_tables WHERE table_name = 'T1';

truncate table chained_rows;

analyze table t1 list chained rows;

select count(*) from chained_rows;

alter session set tracefile_identifier = 'after_update';

alter system checkpoint;

alter system dump datafile &file_no block &block_no;

accept rdba prompt 'Enter forwarding ROWID found in block dump: '

column rdba new_value rdba

-- Remove any potential leading and trailing unnecessary stuff
select
substr('&rdba',
case
when instr('&rdba', '0x') = 0
then 1
else instr('&rdba', '0x') + 2
end,
case
when instr('&rdba', '.') = 0
then 32767
else instr('&rdba', '.') -
case
when instr('&rdba', '0x') = 0
then 0
else instr('&rdba', '0x') + 2
end
end
) as rdba
from
dual
;

select
dbms_utility.data_block_address_file(to_number('&rdba', rpad('X', length('&rdba'), 'X'))) as file_no
, dbms_utility.data_block_address_block(to_number('&rdba', rpad('X', length('&rdba'), 'X'))) as block_no
from
dual
;

alter session set tracefile_identifier = 'migrated_rows';

alter system dump datafile &file_no block &block_no;


If you look closely at the example you'll notice that, from an "innocent" application point of view, the update actually shortens the row - but of course that only holds if you do not take into account the effects on compressed data described above.
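A quick look at the numbers: V1 is always '1BL' (3 bytes) and V2 is initially 'BLUBB' (5 bytes), so setting V2 = V1 nominally makes the row 2 bytes shorter - yet the row still grows, because after the update it gets stored in uncompressed format.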

This is what a sample block dump looks like right after the insert:


.
.
.
data_block_dump,data header at 0x551007c
===============
tsiz: 0x1f80
hsiz: 0x5d6
pbl: 0x0551007c
76543210
flag=-0------
ntab=2
nrow=728
frre=-1
fsbo=0x5d6
fseo=0x113e
avsp=0xc
tosp=0xc
r0_9ir2=0x0
mec_kdbh9ir2=0x1
76543210
shcf_kdbh9ir2=----------
76543210
flag_9ir2=--R----C
fcls_9ir2[4]={ 0 32768 32768 32768 }
0x1e:pti[0] nrow=1 offs=0
0x22:pti[1] nrow=727 offs=1
0x26:pri[0] offs=0x1f71
0x28:pri[1] offs=0x1f6c
0x2a:pri[2] offs=0x1f67
0x2c:pri[3] offs=0x1f62
.
.
.
block_row_dump:
tab 0, row 0, @0x1f71
tl: 15 fb: --H-FL-- lb: 0x0 cc: 3
col 0: [ 1] 31
col 1: [ 3] 31 42 4c
col 2: [ 5] 42 4c 55 42 42
bindmp: 02 d7 03 c9 31 cb 31 42 4c cd 42 4c 55 42 42
tab 1, row 0, @0x1f6c
tl: 5 fb: --H-FL-- lb: 0x0 cc: 3
col 0: [ 1] 31
col 1: [ 3] 31 42 4c
col 2: [ 5] 42 4c 55 42 42
bindmp: 2c 00 01 03 00
tab 1, row 1, @0x1f67
tl: 5 fb: --H-FL-- lb: 0x0 cc: 3
col 0: [ 1] 31
col 1: [ 3] 31 42 4c
col 2: [ 5] 42 4c 55 42 42
bindmp: 2c 00 01 03 00
.
.
.


You'll notice that the rows have been compressed very efficiently (see the "symbol table" stored as "tab 0") and are reported to allocate just 5 bytes each.

It is quite interesting to note that the compressed rows are stored with no gap in between. Without compression Oracle usually allocates at least 9 bytes per row (the minimum row size required for migrated rows) even if the actual row is shorter than that. I'm not sure why Oracle packs the compressed rows so tightly, since it means even more work in case of row migrations - and since Oracle doesn't store more rows in the block than dictated by that 9-byte minimum even with compression enabled, the tight packing is a bit puzzling, as it doesn't buy any additional rows per block. You can tell that from the block header: although there is still physically free space in the block (free space begin offset fsbo=0x5d6, free space end offset fseo=0x113e), it is reported with almost no space available (avsp=0xc).

In contrast this is what a block looks like where uncompressed very small rows are stored:


.
.
.
data_block_dump,data header at 0x610a07c
===============
tsiz: 0x1f80
hsiz: 0x5c6
pbl: 0x0610a07c
76543210
flag=--------
ntab=1
nrow=730
frre=-1
fsbo=0x5c6
fseo=0x5d6
avsp=0x10
tosp=0x10
0xe:pti[0] nrow=730 offs=0
0x12:pri[0] offs=0x1f7b
0x14:pri[1] offs=0x1f72
0x16:pri[2] offs=0x1f69
0x18:pri[3] offs=0x1f60
0x1a:pri[4] offs=0x1f57
.
.
.
block_row_dump:
tab 0, row 0, @0x1f7b
tl: 5 fb: --H-FL-- lb: 0x0 cc: 1
col 0: [ 1] 31
tab 0, row 1, @0x1f72
tl: 5 fb: --H-FL-- lb: 0x0 cc: 1
col 0: [ 1] 31
tab 0, row 2, @0x1f69
tl: 5 fb: --H-FL-- lb: 0x0 cc: 1
col 0: [ 1] 31
tab 0, row 3, @0x1f60
tl: 5 fb: --H-FL-- lb: 0x0 cc: 1
col 0: [ 1] 31
tab 0, row 4, @0x1f57
.
.
.


Although the rows effectively require less than 9 bytes, the gap between each row is exactly 9 bytes, and the available space corresponds to the free space offsets (fsbo=0x5c6, fseo=0x5d6 and avsp=0x10).
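You can verify both gaps directly from the row offsets in the two dumps above: in the compressed block consecutive rows start at 0x1f71, 0x1f6c, 0x1f67, ... - a difference of 0x5 = 5 bytes - whereas in the uncompressed block they start at 0x1f7b, 0x1f72, 0x1f69, ... - a difference of 0x9 = 9 bytes.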

Back to our "basic" compression example: of course we know that the update decompresses the affected rows, which means that they will allocate more than these 5 bytes and hence eventually cause row migrations. Since the decompressed row size is still very small and below the limits outlined in part 1, we run again into the basic bug: the blocks holding the migrated rows can take more rows than there are ITL entries available, leading to the "free space search" anomaly.
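By the way, if you don't want to read block dumps to confirm the decompression, from 11.2 onwards DBMS_COMPRESSION offers a per-row check. A minimal sketch only - it assumes the 11.2 signature of GET_COMPRESSION_TYPE and simply samples a handful of rows after the update:


-- 1 = COMP_NOCOMPRESS, 2 = COMP_FOR_OLTP (11.2 constants)
select dbms_compression.get_compression_type(user, 'T1', rid) as comp_type
     , count(*)
from   (select rowid as rid from t1 where rownum <= 100)
group by dbms_compression.get_compression_type(user, 'T1', rid);
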

OLTP Compression

Let's change the script and use OLTP compression instead. Additionally we'll give Oracle a little room to manoeuvre so that it should be able to store an uncompressed row and re-compress the block when required. I do this by creating and populating the table with PCTFREE 10 but switching to PCTFREE 0 right before the update. Since we don't expect that many row migrations with OLTP compression, the bug shouldn't be a problem.


.
.
.
CREATE TABLE t1
(pkey varchar2(1),
v1 varchar2(255),
v2 varchar2(255)
)
compress for all operations
pctfree 10
TABLESPACE &&tblspace;
.
.
.
alter table t1 pctfree 0;

UPDATE /*+ full(t1) */ t1 SET v2 = v1
where substr(v1, 1, 1) = '1';
.
.
.


So, according to the general description of how OLTP compression works, the expectation would be that whenever Oracle hits the PCTFREE limit it attempts to re-compress the block. Actually a more accurate description is that the compression is supposed to take place whenever Oracle re-organizes the block to coalesce free space, in order to maximize the contiguous chunks of available space. Oracle 10g added a statistic for this called "heap block compress" - see for example Jonathan Lewis' blog post about heap block compression.

But if you run this script in the 11.1.0.7 base release, the results will not be as expected: the table has the same size afterwards as with "basic" compression, and we apparently still hit the bug - the update still takes millions of session logical reads. In fact, looking at the block dumps and statistics it looks as if no HSC compression has taken place at all, although we see a lot of occurrences of "heap block compress". You can confirm this by looking for statistics beginning with "HSC..." after the update - apart from "HSC Heap/Compressed Segment Block Changes" no other HSC-related statistics show up.
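The statistics I'm referring to can easily be checked from the session statistics before and after the update. A minimal sketch - mystats_pkg used in the script above reports the same deltas:


select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    (sn.name like 'HSC%' or sn.name = 'heap block compress')
and    ms.value > 0;
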

If you follow one of the migrated rows, you'll see that even the rows that have been migrated to a new block are not compressed (and not even stored in compressed block format) - so we hit again the same basic "no ITLs left but still free space" problem in the blocks holding the migrated rows.

If you run the same script in 11.2.0.1 or 11.2.0.2 the results look slightly different: the original blocks are still not re-compressed, so we seem to end up with the same number of migrated rows, but at least the blocks holding the migrated rows get compressed and therefore the resulting segment will be smaller. It is also interesting to note that the number of ITL entries in compressed blocks seems to be limited to 20 - probably controlled by the parameter "_limit_itl", as Jonathan Lewis has recently found out.
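If you want to check that hidden parameter yourself, the usual X$ query works - as SYS only, and with the usual caveat that querying X$ tables and relying on undocumented parameters is unsupported. A sketch:


select a.ksppinm  as parameter
     , b.ksppstvl as value
     , a.ksppdesc as description
from   x$ksppi a, x$ksppcv b
where  a.indx = b.indx
and    a.ksppinm = '_limit_itl';
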

OLTP Compression Block Re-Compression

I have to admit that I really had a hard time convincing Oracle, in any of the versions tested, to re-compress a block by anything other than inserts (including the inserts caused by row migrations) - in fact inserts are the only circumstance where I could see the HSC compression effectively work. Most importantly, it doesn't seem to work in my test cases when updating rows that grow in size - leading exactly to our problem: excessive row migrations when updating many rows.

I've tried different variations of DML patterns (small / large transactions, mixtures of inserts, deletes and updates, among those the most straightforward one - updating a column to the same value, which should make re-compression a "no-brainer"), and to me it looks as if OLTP re-compression attempts are only triggered by inserts into a block; updates never seem to trigger a re-compression.

Instead, Oracle performs a row migration rather than attempting to re-compress the block. This might be a deliberate design decision: OLTP compression, apart from the obvious CPU overhead, also seems to write a full pre-compression block image to undo/redo, since the changes done to the block by the compression apparently cannot be represented by discrete change vectors. Therefore minimizing the compression attempts by simply migrating rows when necessary could be intentional - the undo/redo overhead of OLTP compression can be significant.
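That undo/redo overhead is easy to make visible with the same kind of statistics snapshot as above, this time looking at the redo and undo figures. A sketch of the measurement only - take a baseline, run the DML in question, re-run the query and compare the deltas of a run against a compressed table with a run against an uncompressed copy:


select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('redo size', 'undo change vector size');
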

Also, since it is called "OLTP" compression, one could assume that an OLTP workload consists of a general mixture of inserts, updates and deletes, and that therefore not compressing on updates shouldn't be too relevant. This also seems to be confirmed by this section of the "Concepts" guide, which can be interpreted to mean that only inserts trigger a re-compression of a block.

ASSM Bug variation #1

Of course, things could be worse, and that's exactly what is going to happen with OLTP compression in the 11.1.0.7 base release. Increasing the row length or PCTFREE should prevent the basic bug from happening even with excessive row migrations, so let's repeat the test case, this time using the trick from part 1 to prevent the bug: setting PCTFREE to 50 right before the update - this will mark the blocks with the migrated rows as full long before the ITL slots can become depleted.


.
.
.
alter table t1 pctfree 50;

UPDATE /*+ full(t1) */ t1 SET v2 = v1
where substr(v1, 1, 1) = '1';
.
.
.


But look at the results:


.
.
.
50000 rows updated.

Elapsed: 00:00:24.16
.
.
STAT..session logical reads 9,641,591
.
.
.


Even worse than before - even longer runtime, even more "session logical reads" and more CPU consumption.

Looking at the blocks holding the migrated rows we can clearly tell that they don't meet the prerequisites for the basic "ITL is full but free space left" problem - so something else must be going on leading to similar symptoms.


.
.
.
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0002.01b.00000767 0x00c015bc.0298.21 --U- 89 fsc 0x0000.002641bf
0x02 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000
.
.
.
0x5a 0x0000.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.00000000
0x5b 0x0000.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.00000000
bdba: 0x024000ea
data_block_dump,data header at 0x2ea08bc
===============
tsiz: 0x1740
hsiz: 0xc4
pbl: 0x02ea08bc
76543210
flag=--------
ntab=1
nrow=89
frre=-1
fsbo=0xc4
fseo=0x10a5
avsp=0xfe1
tosp=0xfe1
0xe:pti[0] nrow=89 offs=0
0x12:pri[0] offs=0x172d
0x14:pri[1] offs=0x171a
0x16:pri[2] offs=0x1707
.
.
.


When enabling the debug events 10320 and 10612 you'll notice that there is a similar pattern of "...didnot like..." messages as with the basic bug, but in addition there are other entries that don't show up in the basic case: "Compressable Block...". To me it looks as if Oracle actually considers these blocks as candidates for further inserts if they were compressed, but somehow this compression never happens, so the blocks are left in that state and are considered and rejected for inserts over and over again - leading to the same symptoms as with the basic bug, but obviously for different reasons.


.
.
.
KDTGSP: seg:0x2400083 mark_full:0 pdba:0x02400085
kdtgsp: calling ktspgsp_cbk w/ kdt_bseg_srch_cbk() on 73997
Enter cbk---------------
[ktspisc]: Rejected: BlockDBA:0x02400085
ktspfsrch: Returns: BlockDBA:0x024000e8
kdt_bseg_srch_cbk: examine dba=10.0x024000e8
kdt_bseg_srch_cbk:Compressable Block dba=10.0x024000e8 avs=832 afs=0 tosp=832 full=0
kdt_bseg_srch_cbk: failed dba=10.0x024000e8 avs=832 afs=0 tosp=832 full=0
ktspfsrch:Cbk didnot like 0x024000e8
ktspfsrch: Returns: BlockDBA:0x024000eb
kdt_bseg_srch_cbk: examine dba=10.0x024000eb
kdt_bseg_srch_cbk:Compressable Block dba=10.0x024000eb avs=832 afs=0 tosp=832 full=0
kdt_bseg_srch_cbk: failed dba=10.0x024000eb avs=832 afs=0 tosp=832 full=0
ktspfsrch:Cbk didnot like 0x024000eb
ktspfsrch: Returns: BlockDBA:0x024000ee
kdt_bseg_srch_cbk: examine dba=10.0x024000ee
kdt_bseg_srch_cbk:Compressable Block dba=10.0x024000ee avs=832 afs=0 tosp=832 full=0
kdt_bseg_srch_cbk: failed dba=10.0x024000ee avs=832 afs=0 tosp=832 full=0
ktspfsrch:Cbk didnot like 0x024000ee
ktspfsrch: Returns: BlockDBA:0x024000f1
kdt_bseg_srch_cbk: examine dba=10.0x024000f1
kdt_bseg_srch_cbk: found dba=10.0x024000f1 avs=3511 afs=0 tosp=3511 full=1
Exit cbk ------
ndba:0x024000f1
.
.
.


This assumption is actually confirmed by various bug descriptions on My Oracle Support. In particular bugs 9341448, 6009614, 9667930 and 9708484 seem to apply. Note that there seems to be another related bug 8287680 for inserts. Also document 1101900.1 applies. According to these bug descriptions this variation of the bug is actually fixed in some patch sets (as opposed to the "basic" bug in part 1 that is only fixed in the 11.2 release) - and a one-off patch (9667930) seems to be available as well.

It is also interesting to note that this update anomaly is documented in the "Master Note for OLTP Compression [ID 1223705.1]" on My Oracle Support - search for "Test #5" and review the runtimes mentioned in the table for that test, Scenario #3 / #5.

Note that this problem does not show up if the test case is repeated with basic compression instead - basic compression does not attempt to re-compress the blocks and can therefore only be affected by the basic variation of the bug.

ASSM Bug variation #2

But it is not over yet - finally we come to the oddest variation of the bug in the 11.1.0.7 base release: combining basic compression with partitioning can lead to the same symptoms as the just outlined "re-compression" bug with OLTP compression. Oracle apparently sometimes (it is not always reproducible) starts to consider blocks for re-compression, with the same dire results, although basic compression is used.

Again a slight variation of the script gets used: I'll use basic compression with PCTFREE 10 but introduce a very simple partitioning scheme. Also the V2 column gets updated to the same value - all this is done to prevent running into the basic bug, because the migrated rows will be too large for that.


set echo on timing on

drop table t1;

purge table t1;

CREATE TABLE t1
(pkey varchar2(1),
v1 varchar2(255),
v2 varchar2(255)
)
partition by range(pkey)
(
partition p1 values less than (2),
partition p2 values less than (3)
)
compress
pctfree 10
TABLESPACE &&tblspace;

INSERT /*+ append */ INTO t1
SELECT '1' as pkey,
to_char((mod(rownum, 1) + 1), 'TM') || 'BL' AS v1,
'BLUBB' /*null*/ AS v2
FROM dual CONNECT BY LEVEL <= 50000
/

commit;

BEGIN dbms_stats.gather_table_stats(
ownname => null,
tabname => 'T1');
END;
/

create index t1_idx on t1 (substr(v1, 1, 1)) TABLESPACE &tblspace;

SELECT num_rows,blocks FROM user_tables WHERE table_name = 'T1';

truncate table chained_rows;

analyze table t1 list chained rows;

select count(*) from chained_rows;

column file_no new_value file_no
column block_no new_value block_no

select
dbms_rowid.rowid_relative_fno(rowid) as file_no
, dbms_rowid.rowid_block_number(rowid) as block_no
from
t1
where
rownum <= 1;

alter session set tracefile_identifier = 'before_update';

alter system checkpoint;

alter system dump datafile &file_no block &block_no;

pause Press Enter to continue...

exec mystats_pkg.ms_start

-- Counters not updated in 11g
-- execute snap_kcbsw.start_snap

/* This will generate a huge tracefile
alter session set tracefile_identifier = 'space_layer';

alter session set events '10320 trace name context forever, level 3';
alter session set events '10612 trace name context forever, level 1';
*/

-- Uncomment this to prevent the bug
-- alter system flush shared_pool;

UPDATE /*+ full(t1) */ t1 SET v2 = v2
where substr(v1, 1, 1) = '1';

/*
alter session set events '10320 trace name context off';
alter session set events '10612 trace name context off';
*/

commit;

set serveroutput on size 1000000 format wrapped
set linesize 120
set trimspool on

-- Counters not updated in 11g
-- execute snap_kcbsw.end_snap

exec mystats_pkg.ms_stop(1)

BEGIN dbms_stats.gather_table_stats(
ownname => null,
tabname => 'T1');
END;
/

SELECT num_rows,blocks FROM user_tables WHERE table_name = 'T1';

truncate table chained_rows;

analyze table t1 list chained rows;

select count(*) from chained_rows;

alter session set tracefile_identifier = 'after_update';

alter system checkpoint;

alter system dump datafile &file_no block &block_no;

accept rdba prompt 'Enter forwarding ROWID found in block dump: '

column rdba new_value rdba

-- Remove any potential leading and trailing unnecessary stuff
select
substr('&rdba',
case
when instr('&rdba', '0x') = 0
then 1
else instr('&rdba', '0x') + 2
end,
case
when instr('&rdba', '.') = 0
then 32767
else instr('&rdba', '.') -
case
when instr('&rdba', '0x') = 0
then 0
else instr('&rdba', '0x') + 2
end
end
) as rdba
from
dual
;

select
dbms_utility.data_block_address_file(to_number('&rdba', rpad('X', length('&rdba'), 'X'))) as file_no
, dbms_utility.data_block_address_block(to_number('&rdba', rpad('X', length('&rdba'), 'X'))) as block_no
from
dual
;

alter session set tracefile_identifier = 'migrated_rows';

alter system dump datafile &file_no block &block_no;


Now if you run this script under 11.1.0.7 (base release) several times, you might be able to spot similar symptoms: the update suddenly takes very long, with the same diagnosis output - in particular the 10320 and 10612 debug events show the same "Compressable Block" lines - although we don't use advanced compression!

Interestingly the problem in this case can be prevented by invalidating information that is cached in the Shared Pool: ALTERing the table right before the update, or brute-force flushing of the Shared Pool, will prevent the problem from showing up - apparently clearing this information avoids the confusion about re-compressing the blocks. I have to admit, however, that I don't have a sound explanation why this invalidation of Shared Pool contents prevents this variation of the bug.
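For illustration, a sketch of the two workarounds just mentioned, to be run immediately before the problematic update. The ALTER shown (re-asserting the current PCTFREE) is just one example of a harmless table-level ALTER that I'm assuming here - and flushing the Shared Pool is obviously a very blunt instrument on anything but a test system:


-- option 1: a table-level ALTER, for example re-asserting the current PCTFREE
alter table t1 pctfree 10;

-- option 2: brute force - flush the Shared Pool
alter system flush shared_pool;
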

Summary

- Mixing compression with a significant number of updates is a bad idea in general

- OLTP block re-compression seems to be triggered only by inserts - updates lead to row migrations rather than block re-compression

- Due to that behaviour you can end up with more row migrations than expected, even with OLTP compression

- In 11.1.0.7 (base release) something is going wrong with the re-compression attempts of OLTP compression, leading to dire results when row migrations happen and an ASSM tablespace gets used. This is a documented bug; a good starting point is document 1101900.1

- Using basic compression with partitioning can sometimes lead to the same symptoms as described in those bugs for OLTP compression

You should be very careful when using OLTP compression in release 11.1 and monitor the row migration rate - the database might have to work much harder than expected. You should certainly consider applying the one-off patch 9667930, or a patch set that contains the mentioned bug fixes, if you plan to use OLTP compression in 11.1 (with ASSM).
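Monitoring the row migration rate doesn't require block dumps - the approach used in the test scripts above works on any table, and the "table fetch continued row" statistic gives an instance-wide indicator. A sketch; the CHAINED_ROWS table again comes from utlchain.sql, and note that both counts cover chained as well as migrated rows:


-- per table: list migrated/chained rows into CHAINED_ROWS
truncate table chained_rows;
analyze table t1 list chained rows;
select count(*) from chained_rows;

-- instance-wide indicator of rows that had to be followed to another block
select name, value
from   v$sysstat
where  name = 'table fetch continued row';
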

Note that I could not reproduce any of these oddities in 11.2 - apart from the fact that updates never seem to trigger a re-compression. However, some of the bug descriptions found seem to suggest that 11.2 could be affected by some of the mentioned bugs as well.

3 comments:

  1. Remarkable work, not less.

  2. Great article Randolf...thanks.

    The concepts manual link is no longer working - I think it's now: http://docs.oracle.com/cd/E11882_01/server.112/e40540/logical.htm#CNCPT1054

    Is it possible that we need to update every column on the row before Oracle will consider the row to be a completely uncompressed row and then consider it for recompression via OLTP compression?

    Jeff

  3. Hi Jeff,

    yeah, I don't even attempt to update any older references to Oracle documentation - they are changing way too often.

    You don't need to update every column - in fact any update to a single column will uncompress the whole row. Re-compression of the block will only be considered in case of inserts into the block.

    I think the main reason for this strategy is the excessive undo/redo generation in case of compression since the whole block needs to be logged - there are no discrete change vectors describing the modifications to the block performed as part of the compression.

    I know that Oracle has some bugs recorded dealing with the update / uncompress issue with OLTP compression, but I haven't had the time yet to figure out if the behaviour significantly changed in very recent versions like 12.1.0.x.

    Randolf
