You can often read (mostly unqualified) rants in various places and forums about how stupid, unpredictable etc. the Cost Based Optimizer seems to be.
So I think it's time to demonstrate how clever the optimizer sometimes can be.
Consider the following setup:
drop table t_opt_clever purge;
-- Use PCTFREE 99 so that there is only one row per (leaf) block.
-- This allows us to tell how many "rows" had to be inspected
-- by checking the number of (leaf) blocks accessed.
-- Unfortunately Oracle (usually) doesn't show in the execution plan
-- how many rows have been accessed by an operation,
-- but only how many rows the operation returns.
create table t_opt_clever (
id not null constraint pk_opt_clever primary key,
col1 not null,
col2 not null,
col3 not null,
col4 not null,
col5 not null,
filler
)
pctfree 99
pctused 1
as
select
level as id
, round(dbms_random.value(0, 200)) as col1
, round(dbms_random.value(0, 400)) as col2
, case
when level <= 666
then 'FIRST_BUCKET'
when level <= 833
then 'SECOND_BUCKET'
when level <= 1000
then 'THIRD_BUCKET'
end as col3
, round(dbms_random.value(0, 600)) as col4
, round(dbms_random.value(0, 800)) as col5
, rpad('x', 100, 'x') as filler
from
dual
connect by
level <= 1000;
create index idx_opt_clever1 on t_opt_clever (col5, col1, col4, col2) pctfree 99 compute statistics;
create index idx_opt_clever2 on t_opt_clever (col5, col1, col3, col4, col2) pctfree 99 compute statistics;
exec dbms_stats.gather_table_stats(null, 'T_OPT_CLEVER')
-- scale the table and index by factor 1000
exec dbms_stats.set_table_stats(null, 'T_OPT_CLEVER', numrows => 1000000, numblks => 30000)
exec dbms_stats.set_index_stats(null, 'PK_OPT_CLEVER', numrows=> 1000000, numlblks => 2000, numdist=>1000000, clstfct => 100000, indlevel => 3)
exec dbms_stats.set_index_stats(null, 'IDX_OPT_CLEVER1', numrows=> 1000000, numlblks => 14000, numdist=>1000000, clstfct => 1000000, indlevel => 3)
exec dbms_stats.set_index_stats(null, 'IDX_OPT_CLEVER2', numrows=> 1000000, numlblks => 16000, numdist=>1000000, clstfct => 1000000, indlevel => 3)
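To double-check that the scaled statistics are actually in place before running the test query, the data dictionary could be queried like this (a sanity check added here, not part of the original script):
-- Sanity check (not part of the original script):
-- verify the manually set table and index statistics
select table_name, num_rows, blocks
from user_tables
where table_name = 'T_OPT_CLEVER';

select index_name, num_rows, leaf_blocks, distinct_keys, clustering_factor, blevel
from user_indexes
where table_name = 'T_OPT_CLEVER';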
Basically this simulates a 1,000,000-row table with two suboptimal indexes, given the following Top 100 query:
-- Now which index can be efficiently used by the optimizer?
select
*
from (
select
*
from
t_opt_clever
where
col3 = 'FIRST_BUCKET'
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;
Now what do you think: can the optimizer use one of these indexes efficiently, and if yes, which one?
At first sight neither index looks usable: they don't seem to satisfy the requested sort order, so a costly full scan of the data plus a corresponding SORT ORDER BY (STOPKEY) operation seems unavoidable, and they can't be used efficiently to filter the data either, because the filter column is not among the leading index columns.
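A side note on the listings that follow: the runtime figures (A-Rows, A-Time, Buffers, Reads) require rowsource statistics to be collected. The original listing doesn't show this step, but it can be enabled for example like this (or alternatively via the gather_plan_statistics hint):
-- Enable rowsource statistics so that DBMS_XPLAN.DISPLAY_CURSOR
-- with the ALLSTATS LAST option can show A-Rows / A-Time / Buffers
alter session set statistics_level = all;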
Let's check the result:
SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID fz6vky8n5a3xq, child number 0
-------------------------------------
select * from ( select * from t_opt_clever where
col3 = 'FIRST_BUCKET' order by col3, col5, col1, col4, col2 ) where
rownum <= 100
Plan hash value: 4203008252
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.29 | 256 | 100 |
| 2 | VIEW | | 1 | 101 | 109 (0)| 100 |00:00:00.29 | 256 | 100 |
| 3 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 333K| 109 (0)| 100 |00:00:00.29 | 256 | 100 |
|* 4 | INDEX FULL SCAN | IDX_OPT_CLEVER2 | 1 | 101 | 8 (0)| 100 |00:00:00.01 | 156 | 0 |
---------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=100)
4 - access("COL3"='FIRST_BUCKET')
filter("COL3"='FIRST_BUCKET')
24 rows selected.
That is quite interesting: the index IDX_OPT_CLEVER2 is used and no SORT ORDER BY operation can be found in the execution plan, although the index doesn't match the requested sort order. And here comes the cleverness of the optimizer: it recognizes that, due to the filter predicate on COL3, this index can actually be used to satisfy the sort order, because COL3 is irrelevant for the resulting order since it will always hold the constant value of the filter predicate. The same applies to IDX_OPT_CLEVER1, by the way.
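In effect, with COL3 pinned to a single value, the query boils down to the following formulation (shown for illustration only; whether it produces exactly the same plan hasn't been verified here), where the effective sort order (COL5, COL1, COL4, COL2) matches the order of IDX_OPT_CLEVER2 once its constant COL3 key column is ignored:
-- Illustration only: since COL3 can hold just one value here,
-- ordering by it adds nothing, so the effective ORDER BY is this
select
*
from (
select
*
from
t_opt_clever
where
col3 = 'FIRST_BUCKET'
order by
col5, col1, col4, col2
)
where
rownum <= 100;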
But IDX_OPT_CLEVER2 is more efficient than IDX_OPT_CLEVER1 because the filter predicate can already be evaluated on the index data, eliminating some of the rows before visiting the table. Depending on the clustering factor this can make a significant difference to the cost of the operation, since random accesses to table rows potentially have to visit a different block for every single row.
This can be seen when forcing the usage of IDX_OPT_CLEVER1:
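The hinted statement isn't listed separately here, but it can be reconstructed from the SQL text reported in the plan output:
-- Same query as before, but forcing IDX_OPT_CLEVER1 via hint
select
*
from (
select /*+ index(t_opt_clever idx_opt_clever1) */
*
from
t_opt_clever
where
col3 = 'FIRST_BUCKET'
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;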
SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 5tgmgfvyyx6z6, child number 0
-------------------------------------
select * from ( select /*+ index(t_opt_clever idx_opt_clever1) */ * from
t_opt_clever where col3 = 'FIRST_BUCKET' order by col3,
col5, col1, col4, col2 ) where rownum <= 100
Plan hash value: 678132971
---------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.20 | 310 | 54 |
| 2 | VIEW | | 1 | 101 | 312 (1)| 100 |00:00:00.20 | 310 | 54 |
|* 3 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 101 | 312 (1)| 100 |00:00:00.20 | 310 | 54 |
| 4 | INDEX FULL SCAN | IDX_OPT_CLEVER1 | 1 | 1000K| 8 (0)| 154 |00:00:00.01 | 156 | 0 |
---------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=100)
3 - filter("COL3"='FIRST_BUCKET')
23 rows selected.
Two things can be seen here:
1. The optimizer is again smart and is able to avoid the SORT ORDER BY operation, because the index IDX_OPT_CLEVER1 can also be used to return the data in the requested order, again because COL3 is constant.
2. Using IDX_OPT_CLEVER1 is less efficient because more table rows have to be visited to apply the filter predicate: the index full scan returns 154 rows instead of 100, so 54 table rows are fetched only to be discarded by the filter on COL3, which also shows up in the higher number of buffers (310 vs. 256).
The fact that the indexes can only be used efficiently under this special circumstance can be verified by changing the filter predicate so that COL3 can have more than a single value, which makes it impossible to avoid a SORT ORDER BY operation:
-- Change the filter predicate and force index
select
*
from (
select /*+ index(t_opt_clever idx_opt_clever2) */
*
from
t_opt_clever
where
col3 in ('FIRST_BUCKET', 'SECOND_BUCKET')
order by
col3, col5, col1, col4, col2
)
where
rownum <= 100;
SQL> select * from table(dbms_xplan.display_cursor(null, null, '+COST ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID axr6u0yvdk50f, child number 0
-------------------------------------
select * from ( select /*+ index(t_opt_clever idx_opt_clever2) */ * from
t_opt_clever where col3 in ('FIRST_BUCKET', 'SECOND_BUCKET') order by col3, col5, col1,
col4, col2 ) where rownum <= 100
Plan hash value: 2229390605
----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
|* 1 | COUNT STOPKEY | | 1 | | | 100 |00:00:00.02 | 1835 | | | |
| 2 | VIEW | | 1 | 666K| 703K (1)| 100 |00:00:00.02 | 1835 | | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 666K| 703K (1)| 100 |00:00:00.02 | 1835 | 20480 | 20480 |18432 (0)|
| 4 | TABLE ACCESS BY INDEX ROWID| T_OPT_CLEVER | 1 | 666K| 683K (1)| 833 |00:00:00.01 | 1835 | | | |
|* 5 | INDEX FULL SCAN | IDX_OPT_CLEVER2 | 1 | 666K| 16100 (1)| 833 |00:00:00.01 | 1002 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=100)
3 - filter(ROWNUM<=100)
5 - filter(("COL3"='FIRST_BUCKET' OR "COL3"='SECOND_BUCKET'))
25 rows selected.
Without the index hint the optimizer chooses a full table scan. Forcing e.g. the index IDX_OPT_CLEVER2 shows that indeed all matching rows have to be processed first and that an additional sort operation becomes necessary.
So it's interesting to note that the optimizer recognizes special cases where single-value predicates allow an index usage that otherwise wouldn't be possible. This is a nice move, since it makes it possible to run the above query quite efficiently although the setup is suboptimal (e.g. a different index with COL3 as leading column or an appropriate IOT could be more suitable, depending on what else is done with the table). Under these (simulated) circumstances this optimization makes quite a difference compared to the otherwise only possible full table scan of a 30,000-block table.
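As a sketch of those alternatives (hypothetical DDL, not part of the original test case and not benchmarked here), a COL3-leading index or a correspondingly organized IOT could look like this:
-- Hypothetical alternative 1: an index with COL3 as leading column,
-- matching both the filter predicate and the requested sort order
create index idx_opt_clever3 on t_opt_clever (col3, col5, col1, col4, col2);

-- Hypothetical alternative 2: an index-organized table clustered by the
-- same columns (ID added to the primary key to make it unique)
create table t_opt_clever_iot (
id, col1, col2, col3, col4, col5, filler,
constraint pk_opt_clever_iot primary key (col3, col5, col1, col4, col2, id)
)
organization index
as
select id, col1, col2, col3, col4, col5, filler from t_opt_clever;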
By the way, the above results could be reproduced on 10.2.0.4 and 11.1.0.7 Win32 using default system statistics and an 8KB LMT MSSM tablespace.