Wednesday, May 22, 2019

Indexing Null Values - Part 2

In the previous post I demonstrated that Oracle has problems making efficient use of B*Tree indexes when an IS NULL condition is followed by IN / OR predicates also covered by the same index: the trailing predicates are not used to navigate the index structure efficiently but are applied as filters to all index entries identified by the IS NULL condition.

In this part I'll show the results I got when repeating the same exercise using Bitmap indexes. After all, Bitmap indexes include NULL values anyway, so no special tricks are required to use them for an IS NULL search. Let's start again with the same data set (not exactly the same, but very similar) and an index on the single expression that gets searched via IS NULL. The results are again from 18.3.0:

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214500 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create bitmap index null_index_idx on null_index (pct_free);

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500
SQL>
SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1297049223

------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |    19 |  5852 |  2342   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX     |    19 |  5852 |  2342   (1)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE        | NULL_INDEX_IDX |       |       |            |          |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
   3 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       2192  consistent gets
         30  physical reads
          0  redo size
       7199  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

So indeed the Bitmap index was successfully used to identify the PCT_FREE IS NULL rows, but the efficiency suffers from the same problem, and to the same degree, as the corresponding B*Tree index plan - too many rows have to be filtered on table level:

Plan hash value: 1297049223

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |      1 |        |  2342 (100)|    101 |00:00:00.01 |    2192 |     30 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX     |      1 |     19 |  2342   (1)|    101 |00:00:00.01 |    2192 |     30 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                |      1 |        |            |  13433 |00:00:00.01 |       3 |     30 |
|*  3 |    BITMAP INDEX SINGLE VALUE        | NULL_INDEX_IDX |      1 |        |            |      3 |00:00:00.01 |       3 |     30 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
   3 - access("PCT_FREE" IS NULL)

Some interesting points to mention: the 13,000+ rows are identified in the Bitmap index using just three index row entries / bitmap fragments. That's the particular efficiency of Bitmap indexes, where a single index row entry can cover many, many table rows. It's also interesting to see that the costing is quite different from the B*Tree index costing (2342 vs. 1028 - in this case the 2342 is closer to the reality of roughly 2,200 consistent gets, but we'll see in a moment how this can change), and that no cardinality estimate gets mentioned on Bitmap index level - the B*Tree index plan showed a spot-on estimate of 13,433 rows.
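For reference, Rowsource Statistics outputs like the one above can be produced along these lines (a sketch, not the exact commands used for this post; it assumes the GATHER_PLAN_STATISTICS hint - alternatively STATISTICS_LEVEL = ALL at session level has the same effect):

```sql
-- Request collection of actual row source statistics for this execution
select /*+ gather_plan_statistics */ *
from null_index
where pct_free is null
and owner in ('AUDSYS', 'CBO_TEST');

-- Display the plan of the last executed cursor including the
-- Starts / A-Rows / A-Time / Buffers / Reads runtime columns
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```

Note that with AUTOTRACE TRACEONLY active the second query would show the plan of the DBMS_XPLAN call itself, so this needs to be run with autotrace switched off.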

So, reproducing the B*Tree test case, let's add the OWNER column to the Bitmap index in an attempt to increase efficiency. Note that I drop the previous index to prevent Oracle from making "proper" use of Bitmap indexes by combining several of them, as we'll see in a moment:

SQL> drop index null_index_idx;

Index dropped.

SQL> create bitmap index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1751956722

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |    19 |  5852 |  2343   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |    19 |  5852 |  2343   (1)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                 |       |       |            |          |
|*  3 |    BITMAP INDEX RANGE SCAN          | NULL_INDEX_IDX2 |       |       |            |          |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
   3 - access("PCT_FREE" IS NULL)
       filter("PCT_FREE" IS NULL AND ("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        105  consistent gets
         30  physical reads
          0  redo size
       7199  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 1751956722

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |  2343 (100)|    101 |00:00:00.01 |     105 |     30 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |      1 |     19 |  2343   (1)|    101 |00:00:00.01 |     105 |     30 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                 |      1 |        |            |    101 |00:00:00.01 |       4 |     30 |
|*  3 |    BITMAP INDEX RANGE SCAN          | NULL_INDEX_IDX2 |      1 |        |            |      1 |00:00:00.01 |       4 |     30 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
   3 - access("PCT_FREE" IS NULL)
       filter(("PCT_FREE" IS NULL AND ("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')))

So now we end up with a "Bitmap index range scan" operation, which in reality looks pretty efficient: just 105 consistent gets, so assuming 101 consistent gets for accessing the 101 table rows, it required just 4 consistent gets on index level. But then look at the cost estimate: 2343, which is even greater than the cost estimate of the previous plan. And check the "Predicate Information" section, which looks pretty weird, too: an access predicate only for PCT_FREE IS NULL, a filter on index level repeating all the predicates including PCT_FREE IS NULL, and most significantly the predicates on OWNER repeated as a filter on table level.

Clearly, what the optimizer assumes in terms of costing and required predicates doesn't correspond to what happens at runtime. The runtime execution looks pretty efficient, but at least judging by the predicates on index level it still isn't the optimal strategy we would like to see: why the additional filter instead of just an access predicate? We can see this echoed in the Rowsource Statistics: only a single Bitmap index fragment gets produced by the "Bitmap index range scan", but it requires 4 consistent gets on index level, so three of them get "filtered" after access.

The costing seems to assume that only the PCT_FREE IS NULL rows are identified on index level, which clearly isn't the case at runtime...

Of course this is not the proper usage of Bitmap indexes - typically you don't create a multi-column Bitmap index, but instead make use of the real power of Bitmap indexes, which is Oracle's ability to combine multiple of them for efficient access.

Before doing so, just for the sake of completeness, let's repeat with a Bitmap index the composite index variant that turned out to be most efficient in the B*Tree case:

SQL> drop index null_index_idx2;

Index dropped.

SQL> create bitmap index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1022155563

--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |    19 |  5852 |    83   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |                 |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |    19 |  5852 |    83   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |                 |       |       |            |          |
|*  4 |     BITMAP INDEX RANGE SCAN          | NULL_INDEX_IDX3 |       |       |            |          |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PCT_FREE" IS NULL)
   4 - access("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
       filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        207  consistent gets
         30  physical reads
          0  redo size
       7199  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 1022155563

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |    83 (100)|    101 |00:00:00.01 |     207 |     30 |
|   1 |  INLIST ITERATOR                     |                 |      1 |        |            |    101 |00:00:00.01 |     207 |     30 |
|*  2 |   TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |      2 |     19 |    83   (0)|    101 |00:00:00.01 |     207 |     30 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |                 |      2 |        |            |    303 |00:00:00.01 |       5 |     30 |
|*  4 |     BITMAP INDEX RANGE SCAN          | NULL_INDEX_IDX3 |      2 |        |            |      2 |00:00:00.01 |       5 |     30 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PCT_FREE" IS NULL)
   4 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
       filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

While we now see again the desired "INLIST ITERATOR", this plan looks odd for several reasons. In particular, we now have a much lower cost estimate (83), but at runtime it is less efficient than the previous plan (cost estimate 2343, yet only 105 consistent gets), since it requires 207 consistent gets. Why is that? The "Predicate Information" section shows why: only the predicate on OWNER is evaluated on index level (303 rows identified there), and therefore rows need to be filtered on table level. This again looks like an implementation limitation, and a pretty unnecessary one - after all, the PCT_FREE IS NULL should somehow be handled on index level instead.

So finally let's see how things turn out when using Bitmap indexes the way they are designed: by creating multiple single-column ones and letting Oracle combine them:

SQL> create bitmap index null_index_idx on null_index (pct_free);

Index created.

SQL> create bitmap index null_index_idx4 on null_index (owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 704944303

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |    19 |  5852 |     8   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |    19 |  5852 |     8   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                 |       |       |            |          |
|   3 |    BITMAP AND                       |                 |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE       | NULL_INDEX_IDX  |       |       |            |          |
|   5 |     BITMAP OR                       |                 |       |       |            |          |
|*  6 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4 |       |       |            |          |
|*  7 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4 |       |       |            |          |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PCT_FREE" IS NULL)
   6 - access("OWNER"='AUDSYS')
   7 - access("OWNER"='CBO_TEST')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        108  consistent gets
         60  physical reads
          0  redo size
      33646  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 704944303

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |     8 (100)|    101 |00:00:00.01 |     108 |     60 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |      1 |     19 |     8   (0)|    101 |00:00:00.01 |     108 |     60 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                 |      1 |        |            |    101 |00:00:00.01 |       7 |     60 |
|   3 |    BITMAP AND                       |                 |      1 |        |            |      1 |00:00:00.01 |       7 |     60 |
|*  4 |     BITMAP INDEX SINGLE VALUE       | NULL_INDEX_IDX  |      1 |        |            |      3 |00:00:00.01 |       3 |     30 |
|   5 |     BITMAP OR                       |                 |      1 |        |            |      1 |00:00:00.01 |       4 |     30 |
|*  6 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4 |      1 |        |            |      1 |00:00:00.01 |       2 |     30 |
|*  7 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4 |      1 |        |            |      1 |00:00:00.01 |       2 |      0 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PCT_FREE" IS NULL)
   6 - access("OWNER"='AUDSYS')
   7 - access("OWNER"='CBO_TEST')

So now we see access predicates only, and Oracle makes efficient use of the Bitmap indexes by combining them. Nevertheless I find the range of costing amazing: this plan is assigned a cost of 8, yet at runtime it is actually less efficient (108 consistent gets) than the plan above that was assigned a cost of 2343 but required just 105 consistent gets. Clearly the costing of Bitmap indexes is still - even in version 18.3 - full of surprises.

Summary

Repeating the same exercise as previously using Bitmap indexes shows several things:

- Oracle isn't necessarily good at costing and using multi-column Bitmap indexes properly
- The costing of Bitmap indexes is still questionable (the most important figure, the "Clustering Factor", is still meaningless for Bitmap indexes)
- For proper handling, use Bitmap indexes the way they are supposed to be used: create separate single-column ones and let Oracle combine them

Thursday, May 16, 2019

Indexing Null Values - Part 1

Indexing null values in Oracle is something that has been written about a lot in the past already. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes when combining them with something guaranteed to be non-null, be it another column or simply a constant expression.

Jonathan Lewis not too long ago published a note that showed an oddity when dealing with IS NULL predicates, which in the end turned out not to be a real threat but merely a quirk in how Oracle displays the access and filter predicates when accessing an index using IS NULL together with other, trailing predicates.

However, I've recently come across a rather similar case where this display oddity turns into a real threat. To get things started, let's have a look at the following (this is from 18.3.0, but other recent versions should show similar results):

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214700 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create index null_index_idx on null_index (pct_free, ' ');

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500

SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3608178030

------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |    19 |  5852 |  1028   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX     |    19 |  5852 |  1028   (1)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | NULL_INDEX_IDX | 13433 |       |    32   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
   2 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       2178  consistent gets
         35  physical reads
          0  redo size
       7199  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

So this is the known approach of indexing null values by simply adding a constant expression to the index, and we can see from the execution plan that the index was indeed used to identify the rows having NULLs.

But we can also see from the execution plan, the number of consistent gets and the Rowsource Statistics that this access can surely be improved further:

Plan hash value: 3608178030

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |      1 |        |  1028 (100)|    101 |00:00:00.01 |    2178 |     35 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX     |      1 |     19 |  1028   (1)|    101 |00:00:00.01 |    2178 |     35 |
|*  2 |   INDEX RANGE SCAN                  | NULL_INDEX_IDX |      1 |  13433 |    32   (0)|  13433 |00:00:00.01 |      30 |     35 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
   2 - access("PCT_FREE" IS NULL)

Because the additional predicate on OWNER can only be applied on table level, we first identify more than 13,000 rows on index level, visit all those table rows via random access, and apply the filter there to end up with the final 101 rows.

So obviously we should add OWNER to the index to avoid visiting that many table rows:

SQL> create index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3808602675

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |    19 |  5852 |    40   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |    19 |  5852 |    40   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | NULL_INDEX_IDX2 |    19 |       |    38   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PCT_FREE" IS NULL)
       filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        137  consistent gets
         61  physical reads
          0  redo size
      33646  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 3808602675

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |    40 (100)|    101 |00:00:00.01 |     137 |     61 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |      1 |     19 |    40   (0)|    101 |00:00:00.01 |     137 |     61 |
|*  2 |   INDEX RANGE SCAN                  | NULL_INDEX_IDX2 |      1 |     19 |    38   (0)|    101 |00:00:00.01 |      36 |     61 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PCT_FREE" IS NULL)
       filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

So at first sight this looks indeed like an improvement, and compared to the previous execution plan it is - see for example how the number of consistent gets has been reduced. However, there is something odd going on: the index cost part is even greater than in the previous example, and looking more closely at the "Predicate Information" section it becomes obvious that the additional predicate on OWNER isn't applied as an access predicate to the index, but only as a filter. This means that rather than directly identifying the relevant parts of the index by navigating the index structure efficiently using both predicates, only the PCT_FREE IS NULL expression gets used to identify the more than 13,000 corresponding index entries, with the filter on OWNER applied afterwards. While this is better than applying the filter on table level, it can still become a very costly operation, and the question here is: why doesn't Oracle use both expressions to access the index?

The answer looks to me like an implementation restriction - I don't see any technical reason why Oracle shouldn't be capable of doing so. Currently it looks like in this particular case an IN predicate (or the equivalent OR predicates) following an IS NULL gets applied on index level only as a filter, similar to predicates following range or unequal comparisons, or following skipped columns / expressions in a composite index. But for those cases there is a reason why Oracle does so - it can no longer use the sorted index entries for efficient access - and I don't see why this should apply to the IS NULL case. Jonathan's note above shows that in principle, for other kinds of predicates, things work as expected (apart from the display oddity discussed there).

This example highlights another oddity: since the query contains an IN list, ideally we would like to see an INLIST ITERATOR as part of the execution plan, but there is only an INDEX RANGE SCAN operation applying the OWNER predicates as a filter.

By changing the order of the index expressions so that the expression used for the IS NULL predicate becomes the trailing one, we can see the following:

SQL> create index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2178707950

--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |    19 |  5852 |     6   (0)| 00:00:01 |
|   1 |  INLIST ITERATOR                     |                 |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |    19 |  5852 |     6   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | NULL_INDEX_IDX3 |    19 |       |     4   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST') AND "PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        108  consistent gets
         31  physical reads
          0  redo size
      33646  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 2178707950

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |     6 (100)|    101 |00:00:00.01 |     108 |     31 |
|   1 |  INLIST ITERATOR                     |                 |      1 |        |            |    101 |00:00:00.01 |     108 |     31 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX      |      2 |     19 |     6   (0)|    101 |00:00:00.01 |     108 |     31 |
|*  3 |    INDEX RANGE SCAN                  | NULL_INDEX_IDX3 |      2 |     19 |     4   (0)|    101 |00:00:00.01 |       7 |     31 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access((("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')) AND "PCT_FREE" IS NULL)

So this is the expected execution plan, including an INLIST ITERATOR and showing that all predicate expressions get used to access the index efficiently, reducing the number of consistent gets further. Of course, a potential downside here is that this index might not be appropriate if queries are looking for PCT_FREE IS NULL only.
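The flip side is easy to sketch - assuming NULL_INDEX_IDX3 from above were the only index covering PCT_FREE, a query searching for PCT_FREE IS NULL alone can't navigate via the leading OWNER column:

```sql
-- With only NULL_INDEX_IDX3 (owner, pct_free) in place, a query filtering
-- solely on PCT_FREE IS NULL can't use the index leading column:
set autotrace traceonly explain

select * from null_index where pct_free is null;

-- Expect a FULL TABLE SCAN here (or possibly an INDEX SKIP SCAN if OWNER
-- had very few distinct values) rather than an efficient range scan.
```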

Summary

It looks like IN / OR predicates following an IS NULL comparison at the index level are only applied as filters and therefore also prevent other efficient operations like inlist iterators. In principle the problem can be worked around by putting the IS NULL expression at the end of a composite index, but that could come at the price of requiring an additional index on the IS NULL expression alone when there is a need to search efficiently for just that expression.
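The two-index setup just described can be sketched like this (index definitions taken from the test script below):

```sql
-- Composite index with the IS NULL expression at the end: supports
-- OWNER + PCT_FREE IS NULL searches efficiently, including INLIST ITERATOR
create index null_index_idx3 on null_index (owner, pct_free);

-- Additional index for searches on PCT_FREE IS NULL alone: appending a
-- constant expression makes the B*Tree index include entirely NULL keys
create index null_index_idx on null_index (pct_free, ' ');
```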

In part 2, out of curiosity, I'll have a look at what happens when applying the same to Bitmap indexes, which include NULL values anyway...

Script used:

set echo on

drop table null_index purge;

create table null_index as select * from dba_tables;

insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

commit;

exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

create index null_index_idx on null_index (pct_free, ' ');

set serveroutput off pagesize 5000 arraysize 500

set autotrace traceonly

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx2 on null_index (pct_free, owner);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx3 on null_index (owner, pct_free);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

Wednesday, May 8, 2019

Compression Restrictions - Update: Wide Table Compression In 12.1 And Later

I do have a very old post here that used to be rather popular, listing various restrictions related to compression. One of the most obvious restrictions in older versions was that basic / OLTP (Advanced Row) heap table compression - based on symbol tables / de-duplication - was limited to tables with 255 columns or fewer, or, more generally, to rows consisting of a single row piece.
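For reference, Oracle stores at most 255 columns per row piece, so the rows of the 1,000-column table used below necessarily consist of multiple row pieces:

```sql
-- A row piece holds at most 255 columns, so a 1,000 column row is split
-- into ceil(1000 / 255) = 4 row pieces (even within a single block):
select ceil(1000 / 255) as row_pieces from dual;
```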

This limitation was lifted with Oracle 12c, which started to support this compression type for tables with more than 255 columns / multiple row pieces - and there is a new internal parameter called "_widetab_comp_enabled" that defaults to TRUE.
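As a side note, hidden parameters like this one can only be inspected as SYS via the undocumented x$ fixed tables - a sketch, with the usual caveat that the x$ structures are undocumented and may change between versions:

```sql
-- Requires SYS; x$ksppi holds parameter names, x$ksppcv the current values
select i.ksppinm as name, v.ksppstvl as value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_widetab_comp_enabled';
```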

So this is nothing really new - 12.1 was released a long time ago - but it looks like it wasn't mentioned in the official 12c "New Features Guide", only in some other documents as a side note, although it could be an important change for users with such "wide" tables who aren't allowed to make use of HCC compression and would like to save space and improve I/O performance.

Now the odd thing is that Oracle obviously found some problems with this change in 12.1.0.2 - they seem to revolve around redo apply - and hence decided to disable this feature partially in later PSUs of 12.1.0.2. There are a number of related notes on My Oracle Support, specifically:

All About Advanced Table Compression (Overview, Usage, Examples, Restrictions) (Doc ID 882712.1)

Advanced Compression Restriction on 255 Columns Lifted For Oracle 12c (Doc ID 1612095.1)

Bug 25498473 : OLTP COMPRESSION DOESN'T WORK IN 12.1.0.2.160719 AND HIGHER WITH > 255 COLUMNS

So the summary of the situation seems to be this:

- In 12.1.0.2 with current PSUs applied you only get compression of tables with multiple row pieces when using Advanced Compression and conventional DML. Direct Path Load / CTAS / ALTER TABLE MOVE reorganisation does not compress, hence basic compression does not work at all for such "wide" tables

- Starting with Oracle 12.2 this (artificial) limitation is lifted again, and the Direct Path Load / CTAS / ALTER TABLE MOVE reorganisation code paths support compression once more - which seems to include basic compression, although the above-mentioned documents only refer to OLTP (Advanced Row) compression

Note that the code obviously - according to the mentioned documents - checks the COMPATIBLE setting of the database, so running 12.2+ with COMPATIBLE set to 12.1 means 12.1 behaviour, i.e. no basic compression for "wide" tables.
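So when checking a given environment, the COMPATIBLE setting matters as much as the patch level - for example:

```sql
-- The effective behaviour depends on COMPATIBLE, not just the binary version
select value from v$parameter where name = 'compatible';

-- or simply in SQL*Plus:
-- show parameter compatible
```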

So, depending on the patch level of the database, a script populating a 1,000-column table with 10,000 rows of highly repetitive content (the script can be found at the end of this post) produces the following output in 12.1.0.2:

Unpatched 12.1.0.2:

SQL> create table many_cols (
  2  col001 varchar2(10),
  3  col002 varchar2(10),
  4  col003 varchar2(10),
.
.
.
1000  col999 varchar2(10),
1001  col1000 varchar2(10)
1002  );

Table created.

SQL>
SQL> declare
  2    s_sql1 varchar2(32767);
  3    s_sql2 varchar2(32767);
  4  begin
  5    for i in 1..1000 loop
  6      s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
  7      s_sql2 := s_sql2 || '''BLABLA'',';
  8    end loop;
  9
 10    s_sql1 := rtrim(s_sql1, ',');
 11    s_sql2 := rtrim(s_sql2, ',');
 12
 13    s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
 14
 15    for i in 1..10000 loop
 16      execute immediate s_sql1;
 17    end loop;
 18
 19    commit;
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

  COUNT(*)
----------
     10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
     10003

SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
        55

12.1.0.2 with some recent PSU applied:

SQL> create table many_cols (
  2  col001 varchar2(10),
  3  col002 varchar2(10),
  4  col003 varchar2(10),
.
.
.
1000  col999 varchar2(10),
1001  col1000 varchar2(10)
1002  );

Table created.

SQL>
SQL> declare
  2    s_sql1 varchar2(32767);
  3    s_sql2 varchar2(32767);
  4  begin
  5    for i in 1..1000 loop
  6      s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
  7      s_sql2 := s_sql2 || '''BLABLA'',';
  8    end loop;
  9
 10    s_sql1 := rtrim(s_sql1, ',');
 11    s_sql2 := rtrim(s_sql2, ',');
 12
 13    s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
 14
 15    for i in 1..10000 loop
 16      execute immediate s_sql1;
 17    end loop;
 18
 19    commit;
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

  COUNT(*)
----------
     10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
     10097

SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
     10143

12.2 and later produce in principle again the same output as the unpatched 12.1.0.2, here as an example 18.3 (18.4.1 same behaviour):

SQL> create table many_cols (
  2  col001 varchar2(10),
  3  col002 varchar2(10),
  4  col003 varchar2(10),
.
.
.
1000  col999 varchar2(10),
1001  col1000 varchar2(10)
1002  );

Table created.

SQL>
SQL> declare
  2    s_sql1 varchar2(32767);
  3    s_sql2 varchar2(32767);
  4  begin
  5    for i in 1..1000 loop
  6      s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
  7      s_sql2 := s_sql2 || '''BLABLA'',';
  8    end loop;
  9
 10    s_sql1 := rtrim(s_sql1, ',');
 11    s_sql2 := rtrim(s_sql2, ',');
 12
 13    s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
 14
 15    for i in 1..10000 loop
 16      execute immediate s_sql1;
 17    end loop;
 18
 19    commit;
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

  COUNT(*)
----------
     10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
     10003

SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
        55

If you want to make use of "wide" table compression in 12.1.0.2 with a PSU applied, the script has to be changed to enable OLTP / Advanced Row compression before the initial population, and then you'll get this output:

SQL> create table many_cols (
  2  col001 varchar2(10),
  3  col002 varchar2(10),
  4  col003 varchar2(10),
.
.
.
1000  col999 varchar2(10),
1001  col1000 varchar2(10)
1002  )
1003  compress for oltp
1004  ;

Table created.

SQL>
SQL> declare
  2    s_sql1 varchar2(32767);
  3    s_sql2 varchar2(32767);
  4  begin
  5    for i in 1..1000 loop
  6      s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
  7      s_sql2 := s_sql2 || '''BLABLA'',';
  8    end loop;
  9
 10    s_sql1 := rtrim(s_sql1, ',');
 11    s_sql2 := rtrim(s_sql2, ',');
 12
 13    s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
 14
 15    for i in 1..10000 loop
 16      execute immediate s_sql1;
 17    end loop;
 18
 19    commit;
 20  end;
 21  /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

  COUNT(*)
----------
     10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
        88

SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

    BLOCKS
----------
     10143

which confirms that "wide" table compression works for conventional inserts, but requires the additional "Advanced Compression" license. Another downside of this approach is the potentially huge impact on performance and undo / redo generation due to the repeated block compression, with each compression operation dumping the whole before-image block to undo / redo.
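If you want to quantify that overhead for a given load, a simple sketch is to take a snapshot of the session statistics before and after the population run and compare the deltas - for example once with COMPRESS FOR OLTP and once with NOCOMPRESS:

```sql
-- Session-level redo / undo figures; run before and after the insert loop
-- and compare the deltas between the compressed and uncompressed runs:
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ('redo size', 'undo change vector size');
```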

Script used:

set echo on

drop table many_cols purge;

create table many_cols (
col001 varchar2(10),
col002 varchar2(10),
col003 varchar2(10),
col004 varchar2(10),
col005 varchar2(10),
col006 varchar2(10),
col007 varchar2(10),
col008 varchar2(10),
col009 varchar2(10),
col010 varchar2(10),
col011 varchar2(10),
col012 varchar2(10),
col013 varchar2(10),
col014 varchar2(10),
col015 varchar2(10),
col016 varchar2(10),
col017 varchar2(10),
col018 varchar2(10),
col019 varchar2(10),
col020 varchar2(10),
col021 varchar2(10),
col022 varchar2(10),
col023 varchar2(10),
col024 varchar2(10),
col025 varchar2(10),
col026 varchar2(10),
col027 varchar2(10),
col028 varchar2(10),
col029 varchar2(10),
col030 varchar2(10),
col031 varchar2(10),
col032 varchar2(10),
col033 varchar2(10),
col034 varchar2(10),
col035 varchar2(10),
col036 varchar2(10),
col037 varchar2(10),
col038 varchar2(10),
col039 varchar2(10),
col040 varchar2(10),
col041 varchar2(10),
col042 varchar2(10),
col043 varchar2(10),
col044 varchar2(10),
col045 varchar2(10),
col046 varchar2(10),
col047 varchar2(10),
col048 varchar2(10),
col049 varchar2(10),
col050 varchar2(10),
col051 varchar2(10),
col052 varchar2(10),
col053 varchar2(10),
col054 varchar2(10),
col055 varchar2(10),
col056 varchar2(10),
col057 varchar2(10),
col058 varchar2(10),
col059 varchar2(10),
col060 varchar2(10),
col061 varchar2(10),
col062 varchar2(10),
col063 varchar2(10),
col064 varchar2(10),
col065 varchar2(10),
col066 varchar2(10),
col067 varchar2(10),
col068 varchar2(10),
col069 varchar2(10),
col070 varchar2(10),
col071 varchar2(10),
col072 varchar2(10),
col073 varchar2(10),
col074 varchar2(10),
col075 varchar2(10),
col076 varchar2(10),
col077 varchar2(10),
col078 varchar2(10),
col079 varchar2(10),
col080 varchar2(10),
col081 varchar2(10),
col082 varchar2(10),
col083 varchar2(10),
col084 varchar2(10),
col085 varchar2(10),
col086 varchar2(10),
col087 varchar2(10),
col088 varchar2(10),
col089 varchar2(10),
col090 varchar2(10),
col091 varchar2(10),
col092 varchar2(10),
col093 varchar2(10),
col094 varchar2(10),
col095 varchar2(10),
col096 varchar2(10),
col097 varchar2(10),
col098 varchar2(10),
col099 varchar2(10),
col100 varchar2(10),
col101 varchar2(10),
col102 varchar2(10),
col103 varchar2(10),
col104 varchar2(10),
col105 varchar2(10),
col106 varchar2(10),
col107 varchar2(10),
col108 varchar2(10),
col109 varchar2(10),
col110 varchar2(10),
col111 varchar2(10),
col112 varchar2(10),
col113 varchar2(10),
col114 varchar2(10),
col115 varchar2(10),
col116 varchar2(10),
col117 varchar2(10),
col118 varchar2(10),
col119 varchar2(10),
col120 varchar2(10),
col121 varchar2(10),
col122 varchar2(10),
col123 varchar2(10),
col124 varchar2(10),
col125 varchar2(10),
col126 varchar2(10),
col127 varchar2(10),
col128 varchar2(10),
col129 varchar2(10),
col130 varchar2(10),
col131 varchar2(10),
col132 varchar2(10),
col133 varchar2(10),
col134 varchar2(10),
col135 varchar2(10),
col136 varchar2(10),
col137 varchar2(10),
col138 varchar2(10),
col139 varchar2(10),
col140 varchar2(10),
col141 varchar2(10),
col142 varchar2(10),
col143 varchar2(10),
col144 varchar2(10),
col145 varchar2(10),
col146 varchar2(10),
col147 varchar2(10),
col148 varchar2(10),
col149 varchar2(10),
col150 varchar2(10),
col151 varchar2(10),
col152 varchar2(10),
col153 varchar2(10),
col154 varchar2(10),
col155 varchar2(10),
col156 varchar2(10),
col157 varchar2(10),
col158 varchar2(10),
col159 varchar2(10),
col160 varchar2(10),
col161 varchar2(10),
col162 varchar2(10),
col163 varchar2(10),
col164 varchar2(10),
col165 varchar2(10),
col166 varchar2(10),
col167 varchar2(10),
col168 varchar2(10),
col169 varchar2(10),
col170 varchar2(10),
col171 varchar2(10),
col172 varchar2(10),
col173 varchar2(10),
col174 varchar2(10),
col175 varchar2(10),
col176 varchar2(10),
col177 varchar2(10),
col178 varchar2(10),
col179 varchar2(10),
col180 varchar2(10),
col181 varchar2(10),
col182 varchar2(10),
col183 varchar2(10),
col184 varchar2(10),
col185 varchar2(10),
col186 varchar2(10),
col187 varchar2(10),
col188 varchar2(10),
col189 varchar2(10),
col190 varchar2(10),
col191 varchar2(10),
col192 varchar2(10),
col193 varchar2(10),
col194 varchar2(10),
col195 varchar2(10),
col196 varchar2(10),
col197 varchar2(10),
col198 varchar2(10),
col199 varchar2(10),
col200 varchar2(10),
col201 varchar2(10),
col202 varchar2(10),
col203 varchar2(10),
col204 varchar2(10),
col205 varchar2(10),
col206 varchar2(10),
col207 varchar2(10),
col208 varchar2(10),
col209 varchar2(10),
col210 varchar2(10),
col211 varchar2(10),
col212 varchar2(10),
col213 varchar2(10),
col214 varchar2(10),
col215 varchar2(10),
col216 varchar2(10),
col217 varchar2(10),
col218 varchar2(10),
col219 varchar2(10),
col220 varchar2(10),
col221 varchar2(10),
col222 varchar2(10),
col223 varchar2(10),
col224 varchar2(10),
col225 varchar2(10),
col226 varchar2(10),
col227 varchar2(10),
col228 varchar2(10),
col229 varchar2(10),
col230 varchar2(10),
col231 varchar2(10),
col232 varchar2(10),
col233 varchar2(10),
col234 varchar2(10),
col235 varchar2(10),
col236 varchar2(10),
col237 varchar2(10),
col238 varchar2(10),
col239 varchar2(10),
col240 varchar2(10),
col241 varchar2(10),
col242 varchar2(10),
col243 varchar2(10),
col244 varchar2(10),
col245 varchar2(10),
col246 varchar2(10),
col247 varchar2(10),
col248 varchar2(10),
col249 varchar2(10),
col250 varchar2(10),
col251 varchar2(10),
col252 varchar2(10),
col253 varchar2(10),
col254 varchar2(10),
col255 varchar2(10),
col256 varchar2(10),
col257 varchar2(10),
col258 varchar2(10),
col259 varchar2(10),
col260 varchar2(10),
col261 varchar2(10),
col262 varchar2(10),
col263 varchar2(10),
col264 varchar2(10),
col265 varchar2(10),
col266 varchar2(10),
col267 varchar2(10),
col268 varchar2(10),
col269 varchar2(10),
col270 varchar2(10),
col271 varchar2(10),
col272 varchar2(10),
col273 varchar2(10),
col274 varchar2(10),
col275 varchar2(10),
col276 varchar2(10),
col277 varchar2(10),
col278 varchar2(10),
col279 varchar2(10),
col280 varchar2(10),
col281 varchar2(10),
col282 varchar2(10),
col283 varchar2(10),
col284 varchar2(10),
col285 varchar2(10),
col286 varchar2(10),
col287 varchar2(10),
col288 varchar2(10),
col289 varchar2(10),
col290 varchar2(10),
col291 varchar2(10),
col292 varchar2(10),
col293 varchar2(10),
col294 varchar2(10),
col295 varchar2(10),
col296 varchar2(10),
col297 varchar2(10),
col298 varchar2(10),
col299 varchar2(10),
col300 varchar2(10),
col301 varchar2(10),
col302 varchar2(10),
col303 varchar2(10),
col304 varchar2(10),
col305 varchar2(10),
col306 varchar2(10),
col307 varchar2(10),
col308 varchar2(10),
col309 varchar2(10),
col310 varchar2(10),
col311 varchar2(10),
col312 varchar2(10),
col313 varchar2(10),
col314 varchar2(10),
col315 varchar2(10),
col316 varchar2(10),
col317 varchar2(10),
col318 varchar2(10),
col319 varchar2(10),
col320 varchar2(10),
col321 varchar2(10),
col322 varchar2(10),
col323 varchar2(10),
col324 varchar2(10),
col325 varchar2(10),
col326 varchar2(10),
col327 varchar2(10),
col328 varchar2(10),
col329 varchar2(10),
col330 varchar2(10),
col331 varchar2(10),
col332 varchar2(10),
col333 varchar2(10),
col334 varchar2(10),
col335 varchar2(10),
col336 varchar2(10),
col337 varchar2(10),
col338 varchar2(10),
col339 varchar2(10),
col340 varchar2(10),
col341 varchar2(10),
col342 varchar2(10),
col343 varchar2(10),
col344 varchar2(10),
col345 varchar2(10),
col346 varchar2(10),
col347 varchar2(10),
col348 varchar2(10),
col349 varchar2(10),
col350 varchar2(10),
col351 varchar2(10),
col352 varchar2(10),
col353 varchar2(10),
col354 varchar2(10),
col355 varchar2(10),
col356 varchar2(10),
col357 varchar2(10),
col358 varchar2(10),
col359 varchar2(10),
col360 varchar2(10),
col361 varchar2(10),
col362 varchar2(10),
col363 varchar2(10),
col364 varchar2(10),
col365 varchar2(10),
col366 varchar2(10),
col367 varchar2(10),
col368 varchar2(10),
col369 varchar2(10),
col370 varchar2(10),
col371 varchar2(10),
col372 varchar2(10),
col373 varchar2(10),
col374 varchar2(10),
col375 varchar2(10),
col376 varchar2(10),
col377 varchar2(10),
col378 varchar2(10),
col379 varchar2(10),
col380 varchar2(10),
col381 varchar2(10),
col382 varchar2(10),
col383 varchar2(10),
col384 varchar2(10),
col385 varchar2(10),
col386 varchar2(10),
col387 varchar2(10),
col388 varchar2(10),
col389 varchar2(10),
col390 varchar2(10),
col391 varchar2(10),
col392 varchar2(10),
col393 varchar2(10),
col394 varchar2(10),
col395 varchar2(10),
col396 varchar2(10),
col397 varchar2(10),
col398 varchar2(10),
col399 varchar2(10),
col400 varchar2(10),
col401 varchar2(10),
col402 varchar2(10),
col403 varchar2(10),
col404 varchar2(10),
col405 varchar2(10),
col406 varchar2(10),
col407 varchar2(10),
col408 varchar2(10),
col409 varchar2(10),
col410 varchar2(10),
col411 varchar2(10),
col412 varchar2(10),
col413 varchar2(10),
col414 varchar2(10),
col415 varchar2(10),
col416 varchar2(10),
col417 varchar2(10),
col418 varchar2(10),
col419 varchar2(10),
col420 varchar2(10),
col421 varchar2(10),
col422 varchar2(10),
col423 varchar2(10),
col424 varchar2(10),
col425 varchar2(10),
col426 varchar2(10),
col427 varchar2(10),
col428 varchar2(10),
col429 varchar2(10),
col430 varchar2(10),
col431 varchar2(10),
col432 varchar2(10),
col433 varchar2(10),
col434 varchar2(10),
col435 varchar2(10),
col436 varchar2(10),
col437 varchar2(10),
col438 varchar2(10),
col439 varchar2(10),
col440 varchar2(10),
col441 varchar2(10),
col442 varchar2(10),
col443 varchar2(10),
col444 varchar2(10),
col445 varchar2(10),
col446 varchar2(10),
col447 varchar2(10),
col448 varchar2(10),
col449 varchar2(10),
col450 varchar2(10),
col451 varchar2(10),
col452 varchar2(10),
col453 varchar2(10),
col454 varchar2(10),
col455 varchar2(10),
col456 varchar2(10),
col457 varchar2(10),
col458 varchar2(10),
col459 varchar2(10),
col460 varchar2(10),
col461 varchar2(10),
col462 varchar2(10),
col463 varchar2(10),
col464 varchar2(10),
col465 varchar2(10),
col466 varchar2(10),
col467 varchar2(10),
col468 varchar2(10),
col469 varchar2(10),
col470 varchar2(10),
col471 varchar2(10),
col472 varchar2(10),
col473 varchar2(10),
col474 varchar2(10),
col475 varchar2(10),
col476 varchar2(10),
col477 varchar2(10),
col478 varchar2(10),
col479 varchar2(10),
col480 varchar2(10),
col481 varchar2(10),
col482 varchar2(10),
col483 varchar2(10),
col484 varchar2(10),
col485 varchar2(10),
col486 varchar2(10),
col487 varchar2(10),
col488 varchar2(10),
col489 varchar2(10),
col490 varchar2(10),
col491 varchar2(10),
col492 varchar2(10),
col493 varchar2(10),
col494 varchar2(10),
col495 varchar2(10),
col496 varchar2(10),
col497 varchar2(10),
col498 varchar2(10),
col499 varchar2(10),
col500 varchar2(10),
col501 varchar2(10),
col502 varchar2(10),
col503 varchar2(10),
col504 varchar2(10),
col505 varchar2(10),
col506 varchar2(10),
col507 varchar2(10),
col508 varchar2(10),
col509 varchar2(10),
col510 varchar2(10),
col511 varchar2(10),
col512 varchar2(10),
col513 varchar2(10),
col514 varchar2(10),
col515 varchar2(10),
col516 varchar2(10),
col517 varchar2(10),
col518 varchar2(10),
col519 varchar2(10),
col520 varchar2(10),
col521 varchar2(10),
col522 varchar2(10),
col523 varchar2(10),
col524 varchar2(10),
col525 varchar2(10),
col526 varchar2(10),
col527 varchar2(10),
col528 varchar2(10),
col529 varchar2(10),
col530 varchar2(10),
col531 varchar2(10),
col532 varchar2(10),
col533 varchar2(10),
col534 varchar2(10),
col535 varchar2(10),
col536 varchar2(10),
col537 varchar2(10),
col538 varchar2(10),
col539 varchar2(10),
col540 varchar2(10),
col541 varchar2(10),
col542 varchar2(10),
col543 varchar2(10),
col544 varchar2(10),
col545 varchar2(10),
col546 varchar2(10),
col547 varchar2(10),
col548 varchar2(10),
col549 varchar2(10),
col550 varchar2(10),
col551 varchar2(10),
col552 varchar2(10),
col553 varchar2(10),
col554 varchar2(10),
col555 varchar2(10),
col556 varchar2(10),
col557 varchar2(10),
col558 varchar2(10),
col559 varchar2(10),
col560 varchar2(10),
col561 varchar2(10),
col562 varchar2(10),
col563 varchar2(10),
col564 varchar2(10),
col565 varchar2(10),
col566 varchar2(10),
col567 varchar2(10),
col568 varchar2(10),
col569 varchar2(10),
col570 varchar2(10),
col571 varchar2(10),
col572 varchar2(10),
col573 varchar2(10),
col574 varchar2(10),
col575 varchar2(10),
col576 varchar2(10),
col577 varchar2(10),
col578 varchar2(10),
col579 varchar2(10),
col580 varchar2(10),
col581 varchar2(10),
col582 varchar2(10),
col583 varchar2(10),
col584 varchar2(10),
col585 varchar2(10),
col586 varchar2(10),
col587 varchar2(10),
col588 varchar2(10),
col589 varchar2(10),
col590 varchar2(10),
col591 varchar2(10),
col592 varchar2(10),
col593 varchar2(10),
col594 varchar2(10),
col595 varchar2(10),
col596 varchar2(10),
col597 varchar2(10),
col598 varchar2(10),
col599 varchar2(10),
col600 varchar2(10),
col601 varchar2(10),
col602 varchar2(10),
col603 varchar2(10),
col604 varchar2(10),
col605 varchar2(10),
col606 varchar2(10),
col607 varchar2(10),
col608 varchar2(10),
col609 varchar2(10),
col610 varchar2(10),
col611 varchar2(10),
col612 varchar2(10),
col613 varchar2(10),
col614 varchar2(10),
col615 varchar2(10),
col616 varchar2(10),
col617 varchar2(10),
col618 varchar2(10),
col619 varchar2(10),
col620 varchar2(10),
col621 varchar2(10),
col622 varchar2(10),
col623 varchar2(10),
col624 varchar2(10),
col625 varchar2(10),
col626 varchar2(10),
col627 varchar2(10),
col628 varchar2(10),
col629 varchar2(10),
col630 varchar2(10),
col631 varchar2(10),
col632 varchar2(10),
col633 varchar2(10),
col634 varchar2(10),
col635 varchar2(10),
col636 varchar2(10),
col637 varchar2(10),
col638 varchar2(10),
col639 varchar2(10),
col640 varchar2(10),
col641 varchar2(10),
col642 varchar2(10),
col643 varchar2(10),
col644 varchar2(10),
col645 varchar2(10),
col646 varchar2(10),
col647 varchar2(10),
col648 varchar2(10),
col649 varchar2(10),
col650 varchar2(10),
col651 varchar2(10),
col652 varchar2(10),
col653 varchar2(10),
col654 varchar2(10),
col655 varchar2(10),
col656 varchar2(10),
col657 varchar2(10),
col658 varchar2(10),
col659 varchar2(10),
col660 varchar2(10),
col661 varchar2(10),
col662 varchar2(10),
col663 varchar2(10),
col664 varchar2(10),
col665 varchar2(10),
col666 varchar2(10),
col667 varchar2(10),
col668 varchar2(10),
col669 varchar2(10),
col670 varchar2(10),
col671 varchar2(10),
col672 varchar2(10),
col673 varchar2(10),
col674 varchar2(10),
col675 varchar2(10),
col676 varchar2(10),
col677 varchar2(10),
col678 varchar2(10),
col679 varchar2(10),
col680 varchar2(10),
col681 varchar2(10),
col682 varchar2(10),
col683 varchar2(10),
col684 varchar2(10),
col685 varchar2(10),
col686 varchar2(10),
col687 varchar2(10),
col688 varchar2(10),
col689 varchar2(10),
col690 varchar2(10),
col691 varchar2(10),
col692 varchar2(10),
col693 varchar2(10),
col694 varchar2(10),
col695 varchar2(10),
col696 varchar2(10),
col697 varchar2(10),
col698 varchar2(10),
col699 varchar2(10),
col700 varchar2(10),
col701 varchar2(10),
col702 varchar2(10),
col703 varchar2(10),
col704 varchar2(10),
col705 varchar2(10),
col706 varchar2(10),
col707 varchar2(10),
col708 varchar2(10),
col709 varchar2(10),
col710 varchar2(10),
col711 varchar2(10),
col712 varchar2(10),
col713 varchar2(10),
col714 varchar2(10),
col715 varchar2(10),
col716 varchar2(10),
col717 varchar2(10),
col718 varchar2(10),
col719 varchar2(10),
col720 varchar2(10),
col721 varchar2(10),
col722 varchar2(10),
col723 varchar2(10),
col724 varchar2(10),
col725 varchar2(10),
col726 varchar2(10),
col727 varchar2(10),
col728 varchar2(10),
col729 varchar2(10),
col730 varchar2(10),
col731 varchar2(10),
col732 varchar2(10),
col733 varchar2(10),
col734 varchar2(10),
col735 varchar2(10),
col736 varchar2(10),
col737 varchar2(10),
col738 varchar2(10),
col739 varchar2(10),
col740 varchar2(10),
col741 varchar2(10),
col742 varchar2(10),
col743 varchar2(10),
col744 varchar2(10),
col745 varchar2(10),
col746 varchar2(10),
col747 varchar2(10),
col748 varchar2(10),
col749 varchar2(10),
col750 varchar2(10),
col751 varchar2(10),
col752 varchar2(10),
col753 varchar2(10),
col754 varchar2(10),
col755 varchar2(10),
col756 varchar2(10),
col757 varchar2(10),
col758 varchar2(10),
col759 varchar2(10),
col760 varchar2(10),
col761 varchar2(10),
col762 varchar2(10),
col763 varchar2(10),
col764 varchar2(10),
col765 varchar2(10),
col766 varchar2(10),
col767 varchar2(10),
col768 varchar2(10),
col769 varchar2(10),
col770 varchar2(10),
col771 varchar2(10),
col772 varchar2(10),
col773 varchar2(10),
col774 varchar2(10),
col775 varchar2(10),
col776 varchar2(10),
col777 varchar2(10),
col778 varchar2(10),
col779 varchar2(10),
col780 varchar2(10),
col781 varchar2(10),
col782 varchar2(10),
col783 varchar2(10),
col784 varchar2(10),
col785 varchar2(10),
col786 varchar2(10),
col787 varchar2(10),
col788 varchar2(10),
col789 varchar2(10),
col790 varchar2(10),
col791 varchar2(10),
col792 varchar2(10),
col793 varchar2(10),
col794 varchar2(10),
col795 varchar2(10),
col796 varchar2(10),
col797 varchar2(10),
col798 varchar2(10),
col799 varchar2(10),
col800 varchar2(10),
col801 varchar2(10),
col802 varchar2(10),
col803 varchar2(10),
col804 varchar2(10),
col805 varchar2(10),
col806 varchar2(10),
col807 varchar2(10),
col808 varchar2(10),
col809 varchar2(10),
col810 varchar2(10),
col811 varchar2(10),
col812 varchar2(10),
col813 varchar2(10),
col814 varchar2(10),
col815 varchar2(10),
col816 varchar2(10),
col817 varchar2(10),
col818 varchar2(10),
col819 varchar2(10),
col820 varchar2(10),
col821 varchar2(10),
col822 varchar2(10),
col823 varchar2(10),
col824 varchar2(10),
col825 varchar2(10),
col826 varchar2(10),
col827 varchar2(10),
col828 varchar2(10),
col829 varchar2(10),
col830 varchar2(10),
col831 varchar2(10),
col832 varchar2(10),
col833 varchar2(10),
col834 varchar2(10),
col835 varchar2(10),
col836 varchar2(10),
col837 varchar2(10),
col838 varchar2(10),
col839 varchar2(10),
col840 varchar2(10),
col841 varchar2(10),
col842 varchar2(10),
col843 varchar2(10),
col844 varchar2(10),
col845 varchar2(10),
col846 varchar2(10),
col847 varchar2(10),
col848 varchar2(10),
col849 varchar2(10),
col850 varchar2(10),
col851 varchar2(10),
col852 varchar2(10),
col853 varchar2(10),
col854 varchar2(10),
col855 varchar2(10),
col856 varchar2(10),
col857 varchar2(10),
col858 varchar2(10),
col859 varchar2(10),
col860 varchar2(10),
col861 varchar2(10),
col862 varchar2(10),
col863 varchar2(10),
col864 varchar2(10),
col865 varchar2(10),
col866 varchar2(10),
col867 varchar2(10),
col868 varchar2(10),
col869 varchar2(10),
col870 varchar2(10),
col871 varchar2(10),
col872 varchar2(10),
col873 varchar2(10),
col874 varchar2(10),
col875 varchar2(10),
col876 varchar2(10),
col877 varchar2(10),
col878 varchar2(10),
col879 varchar2(10),
col880 varchar2(10),
col881 varchar2(10),
col882 varchar2(10),
col883 varchar2(10),
col884 varchar2(10),
col885 varchar2(10),
col886 varchar2(10),
col887 varchar2(10),
col888 varchar2(10),
col889 varchar2(10),
col890 varchar2(10),
col891 varchar2(10),
col892 varchar2(10),
col893 varchar2(10),
col894 varchar2(10),
col895 varchar2(10),
col896 varchar2(10),
col897 varchar2(10),
col898 varchar2(10),
col899 varchar2(10),
col900 varchar2(10),
col901 varchar2(10),
col902 varchar2(10),
col903 varchar2(10),
col904 varchar2(10),
col905 varchar2(10),
col906 varchar2(10),
col907 varchar2(10),
col908 varchar2(10),
col909 varchar2(10),
col910 varchar2(10),
col911 varchar2(10),
col912 varchar2(10),
col913 varchar2(10),
col914 varchar2(10),
col915 varchar2(10),
col916 varchar2(10),
col917 varchar2(10),
col918 varchar2(10),
col919 varchar2(10),
col920 varchar2(10),
col921 varchar2(10),
col922 varchar2(10),
col923 varchar2(10),
col924 varchar2(10),
col925 varchar2(10),
col926 varchar2(10),
col927 varchar2(10),
col928 varchar2(10),
col929 varchar2(10),
col930 varchar2(10),
col931 varchar2(10),
col932 varchar2(10),
col933 varchar2(10),
col934 varchar2(10),
col935 varchar2(10),
col936 varchar2(10),
col937 varchar2(10),
col938 varchar2(10),
col939 varchar2(10),
col940 varchar2(10),
col941 varchar2(10),
col942 varchar2(10),
col943 varchar2(10),
col944 varchar2(10),
col945 varchar2(10),
col946 varchar2(10),
col947 varchar2(10),
col948 varchar2(10),
col949 varchar2(10),
col950 varchar2(10),
col951 varchar2(10),
col952 varchar2(10),
col953 varchar2(10),
col954 varchar2(10),
col955 varchar2(10),
col956 varchar2(10),
col957 varchar2(10),
col958 varchar2(10),
col959 varchar2(10),
col960 varchar2(10),
col961 varchar2(10),
col962 varchar2(10),
col963 varchar2(10),
col964 varchar2(10),
col965 varchar2(10),
col966 varchar2(10),
col967 varchar2(10),
col968 varchar2(10),
col969 varchar2(10),
col970 varchar2(10),
col971 varchar2(10),
col972 varchar2(10),
col973 varchar2(10),
col974 varchar2(10),
col975 varchar2(10),
col976 varchar2(10),
col977 varchar2(10),
col978 varchar2(10),
col979 varchar2(10),
col980 varchar2(10),
col981 varchar2(10),
col982 varchar2(10),
col983 varchar2(10),
col984 varchar2(10),
col985 varchar2(10),
col986 varchar2(10),
col987 varchar2(10),
col988 varchar2(10),
col989 varchar2(10),
col990 varchar2(10),
col991 varchar2(10),
col992 varchar2(10),
col993 varchar2(10),
col994 varchar2(10),
col995 varchar2(10),
col996 varchar2(10),
col997 varchar2(10),
col998 varchar2(10),
col999 varchar2(10),
col1000 varchar2(10)
);
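
As an aside, the column definitions all follow the same pattern, so rather than typing 1,000 of them by hand the column list can be generated, for example with a simple CONNECT BY query (just a sketch - spool or copy the output into the CREATE TABLE statement and trim the trailing comma from the last line):

```sql
-- generate the 1,000 column definitions instead of typing them manually
select 'col' || level || ' varchar2(10),' as column_def
  from dual
connect by level <= 1000;
```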

declare
  s_sql1 varchar2(32767);
  s_sql2 varchar2(32767);
begin
  for i in 1..1000 loop
    -- note: no zero padding - the column names are col1 .. col1000
    s_sql1 := s_sql1 || 'col' || to_char(i) || ',';
    s_sql2 := s_sql2 || '''BLABLA'',';
  end loop;

  s_sql1 := rtrim(s_sql1, ',');
  s_sql2 := rtrim(s_sql2, ',');

  s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';

  for i in 1..10000 loop
    execute immediate s_sql1;
  end loop;

  commit;
end;
/

select count(*) from many_cols;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';

-- alter session set "_widetab_comp_enabled" = false;

alter table many_cols move compress basic;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';

Wednesday, May 1, 2019

I/O Benchmark Minor Update

I've recently published a new version 1.03 of the I/O benchmark scripts on my github repository (ideally pick the IO_BENCHMARK.ZIP containing all the scripts). The original post including some more instructions can be found here, and there is also a video on my Youtube channel explaining how to use the benchmark scripts.

The main change is a new version of the "Write IOPS" benchmark that should scale much better than the older version.

There are now actually two variants of the "max_write_iops_benchmark_slave.sql" script. The currently used one is based on a batch SQL update, whereas the "max_write_iops_benchmark_slave_forall.sql" script uses a PL/SQL FORALL update approach to achieve the same. In my tests the two performed quite similarly, but I've decided to include both so you can test which one works better for you - just rename the scripts accordingly.


In order to max out "Write IOPS" I suggest you create objects that are smaller than the corresponding buffer cache, so they can be cached entirely, and set FAST_START_MTTR_TARGET to 1 to maximize the write pressure on the DBWR process(es). The Online Redo Logs should be sized adequately to avoid bottlenecks in that area. The script is designed to minimize redo generation and maximize the number of modified blocks that then have to be written by DBWR.
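
The corresponding setup could look like this (a sketch only - the cache size is just an example, adjust it to your environment and object sizes):

```sql
-- force aggressive incremental checkpointing, hence maximum DBWR write pressure
alter system set fast_start_mttr_target = 1;

-- example only: a buffer cache comfortably larger than the benchmark objects
-- alter system set db_cache_size = 4G scope = spfile;
```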


You could still run the script in a mixed read/write IOPS mode if you create objects larger than the buffer cache - in which case there can be additional pressure on the DBWR when there are no free buffers left to read in new blocks ("free buffer waits"). I've also already used the "Read IOPS" and "Write IOPS" benchmark scripts successfully at the same time to maximize both read and write IOPS.


There is still the problem - at least in 12.2, and I think it's still there in 18c, though I'm not entirely sure off the top of my head - that the PDB level AWR reports don't properly cover the DBWR "Write I/O" related figures. So although the script specifically detects that it runs on 12.2+ on PDB level and creates AWR PDB reports accordingly, you won't get any useful "Write IOPS" results and would have to either run the benchmark on CDB level or create CDB level AWR snapshots instead.


The interactive scripts now also echo the command finally called to execute the actual benchmark script, which can be helpful if you don't want to go through the interactive script again and again for repeated executions. Of course you would still need to take care of dropping / creating / keeping a suitable schema in which to execute the benchmark, and maybe also modify the scripts so that they don't keep creating and dropping the objects if you want to have multiple runs with the same object size / concurrency settings.

I'm thinking about a version 2.0 of the I/O benchmark scripts that should be improved in various aspects - the "interactive" script should become much more user friendly, with improved defaults, abbreviations that can be entered, and more options like keeping the schema / objects. The measurement of the IOPS / throughput should also be improved by monitoring the figures continuously, which should provide a much better picture of the performance over time (varying IOPS rates, for example). The core I/O benchmark scripts seem to be working pretty well now that the write IOPS part is improved, so I don't see much need for improvement there. Maybe an option to execute the benchmark in a kind of loop with increasing object sizes / concurrency levels might also be useful.