Saturday, July 24, 2010

Double Trouble

One of my clients runs a Data Warehouse with several important data loading and transformation procedures.

One of those key procedures had to be executed in a kind of loop since many of the transformation steps depend on each other's results.

So although each transformation step was executed in a rather efficient way using Oracle best practices like parallel DML / exchange partition, the iterative approach still meant that the underlying server wasn't used as efficiently as possible, since many of those transformation steps also processed smaller data sets that didn't require all of the resources available.

In order to address that they've recently introduced a more sophisticated version of that key procedure that uses a clever algorithm based on a dependency tree to determine which steps can be executed in parallel because they do not depend on each other's results.

Now as part of that optimization they've worked out that on their particular hardware they could run as many as 25 transformation steps in parallel, and furthermore up to four of those 25 as parallel execution steps.

So in principle you end up with two queues - one is dequeued by four consumers processing "big" slices via parallel execution, and a second queue is dequeued by approx. 20 consumers, each processing "small" slices using serial execution (it is quite a big piece of "iron").

This allowed them to increase the resource utilization of the available server hardware and resulted in a significant drop in overall runtime - they actually saved more than 50% of the original runtime, which is quite a good result given that they achieved it on exactly the same hardware!

So far, so very good. Then they got an even "bigger" server with newer CPUs and ran the same database with the same configuration; the only change was the CPUs (which also required a newer version of the operating system, but that is a different story). Since most of the processes really use a lot of CPU (due to the design of the system, running them in parallel makes good use of all the available CPU power because I/O is not the only constraining factor), they expected a further drop in runtime - but to their surprise the key procedure they tested took far longer than on the old hardware.

An analysis showed a really interesting case: it turned out that one of the "weak" points of the system - rather inefficient user-defined PL/SQL functions that perform recursive SQL - had gone sideways in performance and required almost a hundred times longer per execution than before.

The explanation was rather stunning: although no change to the database and in particular no code change to the user-defined functions had been introduced, the recursive queries issued by the user-defined functions suddenly started to be executed using parallel query - which is a rather bad idea when they get executed 20 million times within an hour or so.

The interesting side effect was that on the operating system side it could be observed that kernel time increased to more than 50% of the available CPU time, bringing the whole system more or less to a grinding halt.

Now the big question was: how could this have happened? It had never happened before on any of the previous systems, yet the same code / database on the new system showed this weird behaviour.

The first surprising finding was that the recursive queries showed a parallel execution plan when explaining a plan for them - simply due to the fact that the underlying tables and indexes were marked as parallel. This was an obvious bug in the code that seemed to have been there from the beginning but hadn't surfaced yet.
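
Whether tables or indexes are marked with a parallel degree is easy to spot in the dictionary. A minimal sketch, assuming the objects belong to the current schema (note that DEGREE is a space-padded VARCHAR2 column, hence the TRIM):

select table_name, trim(degree) as degree
from user_tables
where trim(degree) not in ('0', '1');

select index_name, trim(degree) as degree
from user_indexes
where trim(degree) not in ('0', '1');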

So the question became rather: why had this never happened in the past?

Looking further it turned out that the new CPUs resulted in a doubled CPU_COUNT reported by the operating system to Oracle - and they had used a DEFAULT parallel degree for their parallel executions.

This obviously resulted in a doubled degree of parallelism (DOP) for the parallel executions, but they still used the same instance configuration - in particular the same PARALLEL_MAX_SERVERS setting.
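
As a side note, on a single-instance system the default DOP is essentially CPU_COUNT * PARALLEL_THREADS_PER_CPU, which is why doubling CPU_COUNT doubles the degree. A quick sketch to cross-check what the instance reports:

select name, value
from v$parameter
where name in ('cpu_count', 'parallel_threads_per_cpu', 'parallel_max_servers');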

So what happened was that, using the same approach as described above (four consumers for the "big" queue), they simply ran out of parallel servers, resulting in severe downgrades for some of the intended parallel executions - some of them were even downgraded to serial.
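
Such downgrades leave traces in the instance statistics, so they can be verified quite easily - a simple sketch (statistic names as of 10g/11g):

select name, value
from v$sysstat
where name like 'Parallel operations%downgraded%';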

And here these two issues together allowed the bug above to surface: once a parallel execution was downgraded to serial, the recursive query became eligible to be executed in parallel. Since the user-defined functions were marked with PARALLEL_ENABLE they usually were pushed into the parallel slaves and executed there. Oracle recognizes this execution in the slaves and implicitly prevents a recursive parallel execution - you could call this a limitation or restriction, but in this case it prevented the bug from showing up.

So, in summary the following happened:

- In the past the old implementation ran at most one parallel execution at a time, so due to the restrictions enforced via the Resource Manager for other sessions no serial downgrade was possible; the recursive query was pushed into the slaves and executed serially there - no problems so far

- A new implementation was introduced that used a clever queuing approach. As part of that queuing, multiple parallel executions happened simultaneously, but due to the carefully chosen settings these were not downgraded, at least not to serial, and the recursive query ran as intended in the parallel slaves - still no problems

- New hardware was introduced, doubling the CPU_COUNT and therefore doubling the default parallel degree, and bingo: using the same carefully chosen settings, parallel executions were downgraded to serial due to the doubled DOP, and the recursive query eventually started to be executed in parallel - performance went south

After fixing the issue in the code by explicitly forcing serial execution of the recursive query, the problem didn't show up anymore - even with the unadjusted settings still causing serial downgrades.
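
A hedged sketch of what "explicitly forcing serial execution" can look like inside such a function, using the object names from the test case below (note that in the test case the NO_PARALLEL_INDEX hint appears as a plain comment without the "+", i.e. inactive, so that the issue reproduces):

select /*+ no_parallel(f) no_parallel_index(f) */
f.large_vc
into
s_value
from
t_driver d
, t_parallel_index f
where
d.id = p_d_id
and f.pkey = d.pkey
and f.id = p_f_id;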

What can be taken away from this?

- Oracle treats things in parallel slaves differently from things executed in normal sessions

- Using a DEFAULT parallel degree can lead to interesting effects when modifying the underlying hardware / configuration

- Even if your code ran without problems for years it doesn't mean there isn't a bug waiting for you

Just out of curiosity, here is a generic test case that can be used to reproduce the issue at will:


-----------------------------
-- The complete testcase
-----------------------------

-- Tested with the following settings:
-- NO RAC
-- PARALLEL_MAX_SERVERS = 4
-- PARALLEL_ADAPTIVE_MULTI_USER = TRUE
-- PARALLEL_AUTOMATIC_TUNING = FALSE
-- If the above parameters are different the setup needs to be adjusted

-----------------------------
-- The setup
-----------------------------

alter system set parallel_max_servers = 4 parallel_adaptive_multi_user = true scope = memory;

drop package pkg_recursive_query;

drop table t_parallel_index purge;

drop table t_driver purge;

create table t_parallel_index (
id integer not null,
large_vc varchar2(200),
pkey integer not null
)
partition by range (pkey)
(
partition pkey_0 values less than (1),
partition pkey_1 values less than (2),
partition pkey_2 values less than (3)
);

create index t_parallel_index_idx on t_parallel_index (id) local parallel 2;

exec dbms_random.seed(0)

insert into t_parallel_index (
id
, large_vc
, pkey
)
select
rownum as id
, dbms_random.string('X', 100)
, 1
from
dual
connect by
level <= 10000;

commit;

insert into t_parallel_index (
id
, large_vc
, pkey
)
select
rownum as id
, dbms_random.string('X', 100)
, 2
from
dual
connect by
level <= 10000;

commit;

exec dbms_stats.gather_table_stats(null, 't_parallel_index')

explain plan for
select
large_vc
from
t_parallel_index
where
id = 2;

-- This is going to be a parallel index range scan by default
select * from table(dbms_xplan.display);

create table t_driver (
id integer not null,
pkey integer not null
);

alter table t_driver add constraint pk_t_driver primary key (id);

insert into t_driver (
id
, pkey
) values (
100
, 1
);

insert into t_driver (
id
, pkey
) values (
200
, 2
);

commit;

exec dbms_stats.gather_table_stats(null, 't_driver')

explain plan for
select /*+ opt_param('parallel_execution_enabled', 'true') */
f.large_vc
from
t_driver d
, t_parallel_index f
where
d.id = 100
and f.pkey = d.pkey
and f.id = 2;

-- This is again going to be a parallel index range scan by default
select * from table(dbms_xplan.display);

-- The user-defined function performing recursive SQL
create or replace package pkg_recursive_query
as
function get_value(p_f_id in integer, p_d_id in integer) return varchar2 parallel_enable;
end pkg_recursive_query;
/

create or replace package body pkg_recursive_query
as
function get_value(p_f_id in integer, p_d_id in integer) return varchar2 parallel_enable
as
s_value varchar2(200);
begin
select /*+ opt_param('parallel_execution_enabled', 'true') */
/* no_parallel_index(f) */
f.large_vc
into
s_value
from
t_driver d
, t_parallel_index f
where
d.id = p_d_id
and f.pkey = d.pkey
and f.id = p_f_id;

return 'SID: ' || sys_context('USERENV', 'SID') || '|' || s_value;
exception
when NO_DATA_FOUND then
s_value := 'SID: ' || sys_context('USERENV', 'SID') || '|<NO DATA FOUND>' ;
return s_value;
end get_value;
end pkg_recursive_query;
/

-----------------------------
-- The steps to perform
-----------------------------

-- First session
-- The main query is executed in parallel
-- Due to PARALLEL_ENABLE the recursive query is pushed into the parallel slaves
-- Being executed in the slaves forces serial execution of the recursive query
-- Keep the result set open to allocate the parallel slaves
set echo on pause on arraysize 1 pagesize 3 linesize 120

column result format a50 truncated

select /*+ parallel(t, 2) full(t) */
pkg_recursive_query.get_value(t.id, 100) as result
from
t_parallel_index t
where
id <= 50;

-- Second session
-- Simply allocate the second set of parallel slaves
-- Now all parallel slaves are allocated
-- Any further request for parallel execution will be downgraded to serial
set echo on

variable c refcursor

begin
open :c for '
select /*+ parallel(t, 2) full(t) */
*
from
t_parallel_index t
where
id <= 50
';
end;
/

-- Third session
-- The main query is now downgraded to serial
-- Now that the main query is serialized the recursive query potentially can get executed in parallel
-- But no parallel slaves are available so it shares the serial execution with the existing child cursor in the library cache
-- Keep the result set open
set echo on pause on arraysize 1 pagesize 3 linesize 120

column result format a50 truncated

select /*+ parallel(t, 2) full(t) */
pkg_recursive_query.get_value(t.id, 100) as result
from
t_parallel_index t
where
id <= 50;

-- Fourth session forces invalidation of child cursor for recursive query
alter table t_parallel_index truncate partition pkey_0;

-- Now close the second session to free parallel slaves
-- Further executions of the recursive subquery of session 3 will be performed in parallel
-- as long as parallel slaves are available otherwise it will be downgraded to serial
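
While the third session fetches, you can check from yet another session whether the recursive query is actually picked up by parallel slaves - a simple sketch:

select qcsid, sid, server_group, server_set
from v$px_session
order by qcsid, server_group, server_set;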

Sunday, July 11, 2010

Compression Restrictions

When using table or index compression certain restrictions apply. Since I always find it hard to gather that information from the manuals, and since some of the restrictions are not documented properly or not at all, I'll attempt to summarize the restrictions that I'm aware of here:

1. Compression attribute on subpartitions
Note that you can't define the COMPRESSION attribute on the subpartition level - the subpartitions inherit this attribute from the parent partition.

It is however perfectly valid and legal to exchange a subpartition with a compressed table and that way introduce individual compression on the subpartition level. It is unclear why this is only possible using this indirect method - it simply looks like an implementation restriction. Since compressed data is most suitable for data warehouse environments, and these ought to use EXCHANGE [SUB]PARTITION technology anyway, this restriction is probably not that crucial.
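
A minimal sketch of that indirect method, assuming a hypothetical composite-partitioned table T_SUBPART with a subpartition P1_SP1:

-- Create a compressed copy of the subpartition's data
create table t_exchange compress
as
select * from t_subpart subpartition (p1_sp1);

-- Exchange it back in - the subpartition now holds compressed data
alter table t_subpart
exchange subpartition p1_sp1 with table t_exchange;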

2. Number of columns
Basic heap table compression and the new OLTP compression (in 11.1 called "for all operations") are both limited to 255 columns. If you attempt to compress a table with more than 255 columns then no compression will be performed, although you don't get an error message. Interestingly the COMPRESSION column in the dictionary still shows "ENABLED", which is certainly misleading.
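
So the dictionary alone can't be trusted here - it only reflects the attribute, not whether compression actually took place. A quick check might look like this (MANY_X_COLS being the wide table used in the HCC examples further below):

select table_name, compression
from user_tables
where table_name = 'MANY_X_COLS';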

3. DDL Restrictions - modification of table structure
As soon as a segment contains compressed data or has been marked for compression (this does not apply in all cases, see below for more information), the following restrictions regarding the modification of the table structure apply:

a. Basic heap table compression will not prevent the addition of more than 255 columns via ADD COLUMN - but if you update the segment, the affected blocks will be silently decompressed during the update operation. In 9.2 no schema evolution is possible at all with compressed segments; see the next paragraph.

b. In Oracle 9.2 a lot of modifications to the table are no longer possible with compression enabled (and this applies to partitioned tables as well if at least one partition is compressed):

- You cannot add or drop any columns, and you can't even set any column unused. The error messages are, to say the least, confusing, since they tell you something about virtual and object columns, but in fact the problem is the enabled compression. These are serious limitations in terms of schema evolution. I'll address below what can be done to overcome this issue.


Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> create table t_part
2 compress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);
alter table t_part set unused (created)
*
ERROR at line 1:
ORA-12996: cannot drop system-generated virtual column

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-12996: cannot drop system-generated virtual column

SQL> alter table t_part add test_col varchar2(10);
alter table t_part add test_col varchar2(10)
*
ERROR at line 1:
ORA-22856: cannot add columns to object tables

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-22856: cannot add columns to object tables

SQL>


c. In Oracle 10.2 some restrictions have been lifted but there are still a couple of limitations:

- You can now add columns and you can set columns unused, but you can't drop columns, nor can you add columns with default values

This allows schema evolution but has interesting side effects if you attempt to use, for example, a Create Table As Select (CTAS) statement to dynamically create a copy of a table and then perform an EXCHANGE PARTITION operation with that newly created table: if there are unused columns in the source table then these won't be copied over to the table created via CTAS, and therefore the EXCHANGE PARTITION operation will fail with column mismatch error messages. Since you can't drop the unused columns as long as compression is enabled on the table (more on that later since it is a bit more complex with partitioned tables), you effectively can't use that approach. In order to perform EXCHANGE PARTITION with compressed tables and evolved schemas that include unused columns you basically have three choices:

- Maintain the "EXCHANGE" table along with the target table which means apply the same DDL in the same order to both tables in order to keep them synchronized

- Extract the hidden columns from the dictionary and create an exchange table with the correct hidden/unused columns (see the dictionary query sketched after this list), which can turn into a complex operation if you need to cover all possible column data types and features including LOBs etc.

- Perform some kind of rebuild operation on the target table to get rid of the unused columns. Some ideas regarding this point are going to follow further below
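
Regarding the second of these options, the hidden/unused columns can be identified in the dictionary - a sketch using the T_PART table from the examples below:

-- Unused columns show up as hidden columns (typically renamed to SYS_C...)
select column_name, hidden_column
from user_tab_cols
where table_name = 'T_PART'
and hidden_column = 'YES';

-- Tables having unused columns, along with their count
select * from user_unused_col_tabs;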


Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL>


d. Oracle 11 introduces Advanced Compression, which is COMPRESS FOR ALL OPERATIONS in 11.1 or OLTP compression in 11.2 (don't confuse this with the HCC compression levels)

Basic compression in 11.1 has the same limitations as 10.2, but with Advanced Compression (called COMPRESS FOR ALL OPERATIONS in 11.1) all mentioned schema evolution restrictions have been lifted, as can be seen below.


Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 --compress
3 compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>


This is more of a theoretical nature since you usually won't mix basic with advanced compression, but things become more interesting when mixing basic and advanced compression in different partitions of the same table:


Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------------
P_DEFAULT ENABLED DIRECT LOAD ONLY
P_123 ENABLED FOR ALL OPERATIONS

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL>


Notice the inconsistent behaviour: The first DDL command after having added a partition with advanced compression succeeds, but the following DDLs fail with an error message.

If the order of these DDLs is changed, it becomes obvious that the first DDL is always successful, no matter which of the otherwise failing ones comes first.

Furthermore, in 11.1.0.7, if you have a mixture of Advanced Compression (COMPRESS FOR ALL OPERATIONS) and NOCOMPRESS partitions, the above restrictions still apply - only if you have set all partitions to COMPRESS FOR ALL OPERATIONS are the restrictions lifted, which looks more like a bug than a feature.

I'll go into more details below which settings influence this behaviour.

e. Oracle 11.2 introduces Hybrid Columnar Compression (HCC), which is only enabled in conjunction with Exadata

In 11.2 basic compression still has the same restrictions as already mentioned above, and if at least one partition of a partitioned table has Advanced Compression enabled then some of the restrictions are lifted - even with a mixture of basic and advanced compression, or a mixture of no compression and advanced compression. However, adding a column with a default value is still only supported if all partitions are compressed with Advanced Compression. I'm not sure this is really intended behaviour, since at least when mixing Advanced Compression with no compression, adding a column with a default value ought to be possible.


Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 compress
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED BASIC

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop unused columns;

Table altered.

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part move partition p_default compress for all operations;

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED OLTP

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>


When using HCC the most important difference is that it is not restricted to 255 columns - it successfully compresses segments with any number of columns (up to the hard limit of 1,000 columns).

HCC can only be used with direct-path operations; any conventional DML will decompress the affected blocks and use OLTP compression instead - which means that if the segment has more than 255 columns the compression will effectively be disabled, since any non-HCC compression supports only up to 255 columns.
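
In 11.2 this silent switch can be verified using the DBMS_COMPRESSION package, which reports the actual compression type of individual rows - a sketch using the T_PART table from the examples here:

select
dbms_compression.get_compression_type(user, 'T_PART', rowid) as comp_type
, count(*)
from
t_part
group by
dbms_compression.get_compression_type(user, 'T_PART', rowid);

-- comp_type mapping according to the DBMS_COMPRESSION constants:
-- 1 = no compression, 2 = OLTP, 4 = QUERY HIGH, 8 = QUERY LOW,
-- 16 = ARCHIVE HIGH, 32 = ARCHIVE LOW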

HCC has only a few restrictions regarding schema evolution - the most noticeable one is that DROP COLUMN is always turned into a SET COLUMN UNUSED, so with HCC enabled you can't really get rid of columns since DROP UNUSED COLUMNS is not supported and raises an error. This means that dropping / adding columns counts towards the hard limit of 1,000 columns with HCC enabled.

Again this has the same side effect as described above - a simple CTAS operation to create a table for the exchange no longer works once there are unused columns.


SQL> create table t_part
2 compress for query high
3 --compress for all operations
4 partition by list (user_id) (
5 partition p_default values (default)
6 )
7 as
8 select * from all_users;

Table created.

SQL> alter table t_part set unused (created);

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;

Table altered.

SQL> alter table t_part add test_col varchar2(10);

Table altered.

SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123 compress for all operations, partition p_default);

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED QUERY HIGH

SQL> alter table t_part add test_col3 varchar2(10) default 'BLA';

Table altered.

SQL> alter table t_part drop unused columns;
alter table t_part drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table t_part drop column username;
alter table t_part drop column username
*
ERROR at line 1:
ORA-00904: "USERNAME": invalid identifier

SQL> alter table t_part move partition p_default compress for all operations;

Table altered.

SQL> select
2 partition_name
3 , compression
4 , compress_for
5 from
6 user_tab_partitions
7 where
8 table_name = 'T_PART';

PARTITION_NAME COMPRESS COMPRESS_FOR
------------------------------ -------- ------------
P_123 ENABLED OLTP
P_DEFAULT ENABLED OLTP

SQL> alter table t_part add test_col4 varchar2(10) default 'BLA';

Table altered.

SQL>


And here is what happens if you're already hitting the 1,000-column limit and attempt to use DROP [UNUSED] COLUMNS with HCC enabled:


SQL> alter table many_x_cols drop (col256);

Table altered.

SQL> alter table many_x_cols drop unused columns;
alter table many_x_cols drop unused columns
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> alter table many_x_cols add (col256 varchar2(10));
alter table many_x_cols add (col256 varchar2(10))
*
ERROR at line 1:
ORA-01792: maximum number of columns in a table or view is 1000

SQL> alter table many_x_cols move pctfree 0 nocompress;

Table altered.

SQL> alter table many_x_cols drop unused columns;

Table altered.

SQL> alter table many_x_cols add (col256 varchar2(10));

Table altered.

SQL>


As you can see, dropping columns is only supported with HCC disabled, which in this particular case allows adding another column after dropping the hidden/unused one.

- Index restrictions

* You cannot compress a table or a partition of a table (or exchange a compressed partition into it) if there are usable bitmap indexes defined on the table. This is due to the Hakan factor, which is reset when compression is used - with compressed segments more rows fit into a single block, leading to a different Hakan factor for the bitmap indexes. The bitmap indexes need to be set unusable (all partitions in case of a partitioned table/index) or dropped before enabling the compression, and rebuilt / recreated after compressing the heap segment. This is also officially documented. It is however interesting to note that this restriction is only enforced when switching from uncompressed to compressed segments / partitions. Changing the compression, for example from basic compression to HCC compression, is possible without hitting this restriction, but due to the different compression levels this in my opinion potentially leads again to significantly different Hakan factors - so actively resetting the Hakan factor using "ALTER TABLE ... MINIMIZE RECORDS_PER_BLOCK" and rebuilding the bitmap indexes might be in order when moving between different compression levels.


Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> alter table t_part move partition p_123 nocompress;

Table altered.

SQL> alter table t_part modify default attributes nocompress;

Table altered.

SQL> alter table t_part nocompress;

Table altered.

SQL> create bitmap index t_part_idx_bitmap on t_part (test_col) local;

Index created.

SQL> alter table t_part move partition p_default compress basic;
alter table t_part move partition p_default compress basic
*
ERROR at line 1:
ORA-14646: Specified alter table operation involving compression cannot be
performed in the presence of usable bitmap indexes

SQL> alter table t_part minimize records_per_block;
alter table t_part minimize records_per_block
*
ERROR at line 1:
ORA-28602: statement not permitted on tables containing bitmap indexes

SQL> alter index t_part_idx_bitmap modify partition p_default unusable;

Index altered.

SQL> alter table t_part move partition p_default compress basic;
alter table t_part move partition p_default compress basic
*
ERROR at line 1:
ORA-14646: Specified alter table operation involving compression cannot be
performed in the presence of usable bitmap indexes

SQL> alter table t_part minimize records_per_block;
alter table t_part minimize records_per_block
*
ERROR at line 1:
ORA-28602: statement not permitted on tables containing bitmap indexes

SQL> alter index t_part_idx_bitmap unusable;

Index altered.

SQL> alter table t_part move partition p_default compress basic;

Table altered.

SQL> alter table t_part minimize records_per_block;

Table altered.

SQL> alter index t_part_idx_bitmap rebuild partition p_default;

Index altered.

SQL> alter index t_part_idx_bitmap rebuild partition p_123;

Index altered.

SQL> alter table t_part move partition p_default nocompress update indexes;

Table altered.



As you can see, switching on the compression is only allowed if all partitions of the bitmap index(es) are unusable - the opposite is not true: you can uncompress afterwards without any restrictions.

* This is related to B*Tree index compression and again more of a theoretical nature, but it might become relevant when trying to introduce B*Tree index compression partition-wise via EXCHANGE PARTITION: you can't compress a single partition of a partitioned B*Tree index (or exchange a partition with a compressed index into the table) if the whole index was not initially created with compression enabled. If you drop the index and recreate the whole index with compression enabled, then you can turn the index compression individually on and off per index partition.

Very likely this is related to the restriction that the B*Tree index compression prefix length needs to be the same across all index partitions - it can only be defined at the global index level, and I assume this is the reason why the index needs to be created compressed first. Since you usually want all partitions of a partitioned index to be compressed the same way, this is probably a rather irrelevant restriction (except for the mentioned case regarding EXCHANGE PARTITION).


Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create index t_part_idx on t_part (user_id, test_col) nocompress local;

Index created.

SQL> alter index t_part_idx rebuild partition p_default compress;
alter index t_part_idx rebuild partition p_default compress
*
ERROR at line 1:
ORA-28659: COMPRESS must be specified at object level first

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_123 DISABLED
P_DEFAULT DISABLED

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) compress local;

Index created.

SQL> alter index t_part_idx rebuild partition p_default nocompress;

Index altered.

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_123 ENABLED
P_DEFAULT DISABLED

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) nocompress global
2 partition by range(user_id) (
3 partition p_default values less than (maxvalue)
4 );

Index created.

SQL> alter index t_part_idx rebuild partition p_default compress;
alter index t_part_idx rebuild partition p_default compress
*
ERROR at line 1:
ORA-28659: COMPRESS must be specified at object level first

SQL> drop index t_part_idx;

Index dropped.

SQL> create index t_part_idx on t_part (user_id, test_col) compress global
2 partition by range(user_id) (
3 partition p_default values less than (maxvalue)
4 );

Index created.

SQL> alter index t_part_idx rebuild partition p_default nocompress;

Index altered.

SQL> select
2 partition_name
3 , compression
4 from
5 user_ind_partitions
6 where
7 index_name = 'T_PART_IDX';

PARTITION_NAME COMPRESS
------------------------------ --------
P_DEFAULT DISABLED

SQL>


- Overcoming any of the schema evolution restrictions by temporarily uncompressing / disabling enabled compression


As already mentioned above, there are different options to overcome the schema evolution restrictions. One obvious option, although very likely not applicable in many cases simply due to space and load constraints, is to temporarily uncompress the segments and disable the compression. It is interesting to note that it is not as simple as it seems to undo compression with partitioned tables, as I will demonstrate - I think I've even read somewhere that it is not possible at all to revert a table into a state where the mentioned restrictions no longer apply. This is definitely not true.

First of all it is important to note that, in addition to the obvious decompression of each partition that contains compressed data, the "COMPRESS" attribute that determines whether a suitable operation will generate compressed data needs to be reset to NOCOMPRESS (but see below for "oddities"). So there is a significant difference between "ALTER ... [NO]COMPRESS" and "ALTER ... MOVE [NO]COMPRESS". The latter reorganizes the segment / partition and compresses/uncompresses the data while doing so; the former just "marks" the segment / partition as "enabled/disabled for compression" but doesn't touch the data and leaves it in whatever state it is at present. As a side note, the "COMPRESSION" column in the dictionary cannot be used to distinguish between these two - you can have the column show "DISABLED" with data still compressed, you can have the column show "ENABLED" with no compressed data, and you can end up with a mixture of both, since not every block of a segment is necessarily in a compressed state.
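
To make the difference explicit, compare these two statements (partition names taken from the test case below):

-- Attribute only: marks the partition for compression, existing data is left untouched
alter table t_part modify partition p_123 compress;

-- Reorganization: rebuilds the partition and compresses the data while doing so
alter table t_part move partition p_123 compress;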

Furthermore the "COMPRESSION" attribute can be defined on multiple levels with partitioned tables:

- On the global TABLE level: ALTER TABLE ... [NO]COMPRESS

- On the partition level as default attributes: ALTER TABLE ... MODIFY DEFAULT ATTRIBUTES [NO]COMPRESS

- On the partition level to mark for [NO]COMPRESSION: ALTER TABLE ... MODIFY PARTITION [NO]COMPRESS

- On the partition level with MOVE: ALTER TABLE ... MOVE PARTITION [NO]COMPRESS ... [UPDATE [GLOBAL] INDEXES]

Now in order to be able to overcome the schema evolution restrictions, all levels except the partition default attribute level need to be addressed: any compressed data needs to be uncompressed, but also any partition that has been marked for compression (but doesn't necessarily contain compressed data) needs to be modified as well - and finally the global table level needs to be reset, too.

The setting on the global table level seems to be the crucial step. It looks like Oracle checks during this operation (ALTER TABLE ... NOCOMPRESS) whether all subordinate (sub-)partitions have a flag set that marks them as uncompressed. Only if that is the case can some internal flag on the global table level also be reset so that the restrictions no longer apply.

There are however some oddities that need to be considered when doing so:

- You need to "move" any partition (ALTER TABLE ... MOVE PARTITION NOCOMPRESS) that has been marked for compression if any other partition actually held compressed data - simply "unmarking" it using "ALTER TABLE ... MODIFY PARTITION NOCOMPRESS" is not sufficient, even if it does not hold any compressed data (for example, if it has only been marked with "ALTER TABLE ... MODIFY PARTITION COMPRESS").

- If you have BITMAP INDEXES defined on the table with compressed partitions, you need to either set the whole indexes unusable or drop them before the compression can be effectively disabled on the global table level using ALTER TABLE ... NOCOMPRESS

See the following test case for all steps to unwind this:


Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t_part
2 nocompress
3 partition by list (user_id) (
4 partition p_default values (default)
5 )
6 as
7 select * from all_users;

Table created.

SQL> alter table t_part split partition p_default values (123)
2 into (partition p_123, partition p_default);

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- And uncompress it again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- And uncompress it again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- The global table level was missing!
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> -- Start again
SQL> alter table t_part drop column test_col2;

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- Mark in addition another partition for compression (note: no data gets compressed!)
SQL> alter table t_part modify partition p_123 compress;

Table altered.

SQL> -- And uncompress the compressed partition again
SQL> alter table t_part move partition p_default nocompress;

Table altered.

SQL> -- Unmark the other partition marked for compression (note: no data was compressed!)
SQL> alter table t_part modify partition p_123 nocompress;

Table altered.

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- But: Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Oddity 1: Although it doesn't contain compressed data
SQL> -- this partition needs to be reorganized as well
SQL> alter table t_part move partition p_123 nocompress;

Table altered.

SQL> -- Don't forget the global table level, otherwise below doesn't work
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL> -- Start again
SQL> alter table t_part drop column test_col2;

Table altered.

SQL> -- Compress one partition
SQL> alter table t_part move partition p_default compress;

Table altered.

SQL> -- Add a bitmap index
SQL> create bitmap index t_part_idx_bitmap on t_part (username) local;

Index created.

SQL> -- And uncompress the compressed partition again
SQL> alter table t_part move partition p_default nocompress update indexes;

Table altered.

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- But: Doesn't work!
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- Oddity 2: Bitmap indexes on compressed tables need to be set unusable or dropped
SQL> -- Note it is not sufficient to set single partitions unusable
SQL> alter index t_part_idx_bitmap unusable;

Index altered.

SQL> -- But still: Doesn't work because the global level has not been touched again
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';
alter table t_part add test_col2 varchar2(10) default 'BLA'
*
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL> -- The global table level
SQL> alter table t_part nocompress;

Table altered.

SQL> -- Now it works
SQL> alter table t_part add test_col2 varchar2(10) default 'BLA';

Table altered.

SQL>