I've been asked to contribute to "AllThingsOracle.com", a service set up by Red Gate Software for Oracle users, and I've agreed.
I'll publish complete articles there that are not found on my blog, and sometimes just links to my blog. I also plan to contribute some webinars, so stay tuned. In the meantime you may want to have a look at the archived video material already available, where you can, for example, watch webinars featuring Cary Millsap and others.
Whereas on my blog I often focus on highlighting edge cases, my articles and webinars on AllThingsOracle.com are more targeted towards explaining fundamentals.
My first article has just been posted there and it's the initial part of a multi-part series highlighting different aspects of Dynamic Sampling.
Monday, January 30, 2012
Thursday, January 26, 2012
Autotrace Polluting The Shared Pool?
Introduction
Another random note that I made during the sessions I attended at OOW was about the SQL*Plus AUTOTRACE feature. As you're hopefully already aware, this feature has some significant shortcomings, the most obvious being that it doesn't pull the actual execution plan from the Shared Pool after executing the statement, but simply runs an EXPLAIN PLAN on the SQL text - which might produce an execution plan that differs from the actual one for various reasons.
Now the claim was made that, in addition to these shortcomings, the plan generated by the AUTOTRACE feature stays in the Shared Pool and is eligible for sharing, which would mean that other statement executions could be affected by a potentially bad execution plan generated via AUTOTRACE rather than getting re-optimized on their own.
That claim initially struck me as odd, because so far I had been under the impression that the shortcoming of AUTOTRACE was that it simply uses the EXPLAIN PLAN facility to get the execution plan details - and I don't think that any plan generated by EXPLAIN PLAN is eligible for sharing with an actual statement execution. After thinking about it for a while, however, I realized that some interesting side effects are possible - but it depends on how you actually use AUTOTRACE.
Using Default AUTOTRACE
So in order to see what AUTOTRACE does behind the scenes I've decided to trace AUTOTRACE. Here is what I've tried:
set echo on timing on
alter session set tracefile_identifier = 'autotrace';
alter session set sql_trace = true;
set autotrace on
var n number
exec :n := 1
select * from dual where 1 = :n;
select * from dual where dummy = 'X';
And that's a snippet from the corresponding SQL trace file:
.
.
.
=====================
PARSING IN CURSOR #7 len=31 dep=0 uid=91 oct=3 lid=91 tim=651497870527 hv=868568466 ad='7ff0ce23638' sqlid='b9j0230twamck'
select * from dual where 1 = :n
END OF STMT
PARSE #7:c=0,e=460,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497870525
=====================
.
.
.
EXEC #7:c=0,e=1306,p=1,cr=3,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=651497871918
FETCH #7:c=0,e=654,p=2,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=3752461848,tim=651497872660
STAT #7 id=1 cnt=1 pid=0 pos=1 obj=0 op='FILTER (cr=3 pr=2 pw=0 time=0 us)'
STAT #7 id=2 cnt=1 pid=1 pos=1 obj=116 op='TABLE ACCESS FULL DUAL (cr=3 pr=2 pw=0 time=0 us cost=2 size=2 card=1)'
FETCH #7:c=0,e=3,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=3752461848,tim=651497873015
CLOSE #7:c=0,e=16,dep=0,type=0,tim=651497876511
.
.
.
=====================
PARSING IN CURSOR #9 len=79 dep=0 uid=91 oct=3 lid=91 tim=651497880846 hv=3377064296 ad='7ff0ce196a8' sqlid='1tfgxbv4nmub8'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6499083' FOR select * from dual where 1 = :n
END OF STMT
PARSE #9:c=0,e=583,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497880843
=====================
.
.
.
=====================
PARSING IN CURSOR #2 len=74 dep=0 uid=91 oct=3 lid=91 tim=651497888595 hv=920998108 ad='7ff0cdd8b00' sqlid='3s1hh8cvfan6w'
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', :1))
END OF STMT
PARSE #2:c=0,e=264,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651497888593
=====================
.
.
.
PARSING IN CURSOR #7 len=36 dep=0 uid=91 oct=3 lid=91 tim=651498044006 hv=3267611628 ad='7ff0cdbd0f8' sqlid='4k6g7vr1c7kzc'
select * from dual where dummy = 'X'
END OF STMT
PARSE #7:c=0,e=1071,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=651498044003
EXEC #7:c=0,e=34,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=272002086,tim=651498044138
FETCH #7:c=0,e=60,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,plh=272002086,tim=651498044289
STAT #7 id=1 cnt=1 pid=0 pos=1 obj=116 op='TABLE ACCESS FULL DUAL (cr=3 pr=0 pw=0 time=0 us cost=2 size=2 card=1)'
FETCH #7:c=0,e=2,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=272002086,tim=651498044616
CLOSE #7:c=0,e=28,dep=0,type=0,tim=651498062083
.
.
.
=====================
PARSING IN CURSOR #2 len=84 dep=0 uid=91 oct=50 lid=91 tim=651498073656 hv=290419607 ad='7ff0cdb8a28' sqlid='5jx46tw8nywwr'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6499083' FOR select * from dual where dummy = 'X'
END OF STMT
PARSE #2:c=0,e=1295,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=651498073653
=====================
.
.
.
=====================
PARSING IN CURSOR #9 len=74 dep=0 uid=91 oct=3 lid=91 tim=651498076015 hv=920998108 ad='7ff0cdd8b00' sqlid='3s1hh8cvfan6w'
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', :1))
END OF STMT
PARSE #9:c=0,e=254,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=651498076013
=====================
.
.
.
So that looks pretty much like the expected behaviour I've mentioned above - AUTOTRACE executes the statements and afterwards runs an EXPLAIN PLAN to show the execution plan.
As a side note, it's interesting that the SQL trace doesn't contain the queries used to gather the delta of the session statistics. The reason is simple: they are not issued by this session. SQL*Plus temporarily establishes a second session for that purpose, using one of the OCI modes that allow creating a second session on the same connection / process. You can tell by looking at the corresponding V$SESSION.PADDR and the entry in V$PROCESS: both sessions use the same process entry (dedicated server model). By the way, I've adopted the same approach in SQLTools++, the GUI that I maintain, for all activities that could potentially interfere with the main session, like collecting the session statistics delta or calling DBMS_XPLAN.DISPLAY_CURSOR.
So when using AUTOTRACE in this way, the only potential threat comes from the actual execution of the statement - but this is no different from executing the statement in any other way. Of course, using an odd bind value in the execution as part of the AUTOTRACE activity could theoretically lead to issues with the shared usage of such a cursor afterwards - but again, this is nothing special to AUTOTRACE.
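If you want to verify the two-sessions-per-process point yourself, a query along these lines should do - run it as a user with access to V$SESSION and V$PROCESS while another SQL*Plus session has AUTOTRACE enabled. This is just a sketch assuming the dedicated server model; the exact set of sessions returned obviously depends on your environment:

```sql
-- Sketch: list sessions that share a single server process.
-- With AUTOTRACE active, the SQL*Plus session and its helper
-- session should both show up with the same PADDR / SPID.
select
        s.sid
      , s.serial#
      , s.program
      , s.paddr
      , p.spid
from
        v$session s
      , v$process p
where
        s.paddr = p.addr
and     s.paddr in (
          -- processes serving more than one session
          select paddr from v$session
          group by paddr
          having count(*) > 1
        );
```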
The potentially "wrong" execution plan that can be reported via the EXPLAIN PLAN cannot cause problems for other cursors, simply because it is generated via EXPLAIN PLAN. To make this point clear, here is another script that demonstrates:
- How AUTOTRACE can lie
- How EXPLAIN PLAN cursors are unshared by default
-- Demonstrate that AUTOTRACE can lie
set echo on linesize 200 pagesize 0 trimspool on tab off
drop table t;
purge table t;
create table t
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;
exec dbms_stats.gather_table_stats(null, 't')
create index t_idx on t (id);
-- Compare the execution plan
-- reported by AUTOTRACE
-- to the one reported by DBMS_XPLAN.DISPLAY_CURSOR
set autotrace on
var n number
exec :n := 500000
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
set autotrace off
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
select * from table(dbms_xplan.display_cursor(null, null));
-- Demonstrate that EXPLAIN PLAN cursors get special treatment
-- They are unshared by default
set echo off timing off feedback off long 1000000 longchunksize 1000000
spool %TEMP%\explain_plan_example.sql
select * from (
select
sql_fulltext
from
v$sqlstats
where
sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
and sql_text not like '%v$sql%'
order by
last_active_time desc
)
where
rownum <= 1
;
spool off
-- Each execution of the same parent EXPLAIN PLAN cursor
-- leads to a new child cursor
set echo on feedback on timing on pagesize 999
@%TEMP%\explain_plan_example
/
/
column sql_id new_value sql_id
select * from (
select
sql_id
from
v$sqlstats
where
sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
and sql_text not like '%v$sql%'
order by
last_active_time desc
)
where
rownum <= 1
;
select
sql_id
, child_number
, explain_plan_cursor
from
v$sql_shared_cursor
where
sql_id = '&sql_id';
set serveroutput on
@sql_shared_cursor &sql_id
So if you run this script you'll see an example where AUTOTRACE gets it wrong, because the plan generated via EXPLAIN PLAN differs from the actual plan used. Furthermore, the plan generated via EXPLAIN PLAN can only match other EXPLAIN PLAN cursors, and on top of that these are unshared by default - so they pose no threat to any other SQL issued.
Here's a sample output I got from 11.2.0.1:
SQL> -- Demonstrate that AUTOTRACE can lie
SQL> set echo on linesize 200 pagesize 0 trimspool on tab off
SQL>
SQL> drop table t;
Table dropped.
Elapsed: 00:00:00.03
SQL>
SQL> purge table t;
Table purged.
Elapsed: 00:00:00.04
SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as filler
6 from
7 dual
8 connect by
9 level <= 1000000
10 ;
Table created.
Elapsed: 00:00:02.38
SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')
PL/SQL procedure successfully completed.
Elapsed: 00:00:02.40
SQL>
SQL> create index t_idx on t (id);
Index created.
Elapsed: 00:00:01.63
SQL>
SQL> -- Compare the execution plan
SQL> -- reported by AUTOTRACE
SQL> -- to the one reported by DBMS_XPLAN.DISPLAY_CURSOR
SQL> set autotrace on
SQL>
SQL> var n number
SQL>
SQL> exec :n := 500000
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
no rows selected
Elapsed: 00:00:01.51
Execution Plan
----------------------------------------------------------
Plan hash value: 2383791439
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 5175K| 162 (0)| 00:00:02 |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
4 - access("ID">TO_NUMBER(:N))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
15390 consistent gets
15385 physical reads
0 redo size
304 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SQL>
SQL> set autotrace off
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
no rows selected
Elapsed: 00:00:00.98
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 8q13ghbwgsmkv, child number 0
-------------------------------------
select /* FIND_ME */ * from ( select * from t where id > :n ) where
rownum > 1
Plan hash value: 4220795399
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4204 (100)| |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
|* 3 | TABLE ACCESS FULL| T | 500K| 50M| 4204 (1)| 00:00:51 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
3 - filter("ID">:N)
22 rows selected.
Elapsed: 00:00:00.12
SQL>
SQL> -- Demonstrate that EXPLAIN PLAN cursors get special treatment
SQL> -- They are unshared by default
SQL> set echo off timing off feedback off long 1000000 longchunksize 1000000
EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1
SQL>
SQL> @%TEMP%\explain_plan_example
SQL> EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1
5
SQL> /
Explained.
Elapsed: 00:00:00.00
SQL>
SQL> /
Explained.
Elapsed: 00:00:00.00
SQL>
SQL> column sql_id new_value sql_id
SQL>
SQL> select * from (
2 select
3 sql_id
4 from
5 v$sqlstats
6 where
7 sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
8 and sql_text not like '%v$sql%'
9 order by
10 last_active_time desc
11 )
12 where
13 rownum <= 1
14 ;
SQL_ID
-------------
ctms62wkwp7nz
1 row selected.
Elapsed: 00:00:00.03
SQL>
SQL> select
2 sql_id
3 , child_number
4 , explain_plan_cursor
5 from
6 v$sql_shared_cursor
7 where
8 sql_id = '&sql_id';
SQL_ID CHILD_NUMBER E
------------- ------------ -
ctms62wkwp7nz 0 N
ctms62wkwp7nz 1 Y
ctms62wkwp7nz 2 Y
3 rows selected.
Elapsed: 00:00:00.03
SQL>
SQL> set serveroutput on
SQL>
SQL> @sql_shared_cursor &sql_id
SQL> declare
2 c number;
3 col_cnt number;
4 col_rec dbms_sql.desc_tab;
5 col_value varchar2(4000);
6 ret_val number;
7 begin
8 c := dbms_sql.open_cursor;
9 dbms_sql.parse(c,
10 'select q.sql_text, s.*
11 from v$sql_shared_cursor s, v$sql q
12 where s.sql_id = q.sql_id
13 and s.child_number = q.child_number
14 and q.sql_id = ''&1''',
15 dbms_sql.native);
16 dbms_sql.describe_columns(c, col_cnt, col_rec);
17
18 for idx in 1 .. col_cnt loop
19 dbms_sql.define_column(c, idx, col_value, 4000);
20 end loop;
21
22 ret_val := dbms_sql.execute(c);
23
24 while(dbms_sql.fetch_rows(c) > 0) loop
25 for idx in 1 .. col_cnt loop
26 dbms_sql.column_value(c, idx, col_value);
27 if col_rec(idx).col_name in ('SQL_ID', 'ADDRESS', 'CHILD_ADDRESS',
28 'CHILD_NUMBER', 'SQL_TEXT') then
29 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
30 ' = ' || col_value);
31 elsif col_value = 'Y' then
32 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
33 ' = ' || col_value);
34 end if;
35 end loop;
36 dbms_output.put_line('--------------------------------------------------');
37 end loop;
38
39 dbms_sql.close_cursor(c);
40 end;
41 /
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DD87E70
CHILD_NUMBER = 0
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCD0D10
CHILD_NUMBER = 1
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCAAA20
CHILD_NUMBER = 2
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.10
SQL>
Other Autotrace Options
The perhaps less expected aspect comes into the picture if you use AUTOTRACE differently. There are various options, and with a particular combination AUTOTRACE doesn't actually execute the statement but only reports the execution plan. So if you change the first example above from:
SET AUTOTRACE ON
to
SET AUTOTRACE TRACEONLY EXPLAIN
then have a close look at the SQL trace generated:
.
.
.
=====================
PARSING IN CURSOR #2 len=45 dep=0 uid=91 oct=3 lid=91 tim=416642144779 hv=3626603586 ad='7ff13a1c8b0' sqlid='9pj321gc2m522'
select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #2:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=416642144777
CLOSE #2:c=0,e=14,dep=0,type=0,tim=416642145372
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=93 dep=0 uid=91 oct=3 lid=91 tim=416642148753 hv=2987003528 ad='7ff13cd8ea0' sqlid='fu0myxft0n3n8'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #3:c=0,e=689,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=416642148749
=====================
.
.
.
=====================
PARSING IN CURSOR #6 len=50 dep=0 uid=91 oct=3 lid=91 tim=416642233676 hv=37196885 ad='7ff138c8570' sqlid='f8cyn9w13g52p'
select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #6:c=0,e=116,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=272002086,tim=416642233673
CLOSE #6:c=0,e=32,dep=0,type=0,tim=416642237105
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=98 dep=0 uid=91 oct=50 lid=91 tim=416642243694 hv=390050481 ad='7ff1374bcf8' sqlid='8vvq0ncbmzcpj'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #3:c=0,e=1261,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=416642243691
=====================
.
.
.
Can you spot the difference? SQL*Plus now only parses the SQL before actually running the EXPLAIN PLAN command.
Let's see what happens if the second example from above gets executed with the AUTOTRACE TRACEONLY EXPLAIN option:
-- Demonstrate that AUTOTRACE TRACEONLY EXPLAIN
-- can cause problems for other SQL executions
set echo on linesize 200 pagesize 0 trimspool on tab off
drop table t;
purge table t;
create table t
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;
exec dbms_stats.gather_table_stats(null, 't')
create index t_idx on t (id);
set autotrace traceonly explain
var n number
exec :n := 500000
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
set autotrace off
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
-- Now the execution plan generated by the PARSE call issued by SQL*Plus
-- will be re-used by the subsequent executions
select * from table(dbms_xplan.display_cursor(null, null));
Here's again a sample output from 11.2.0.1:
SQL> drop table t;
Table dropped.
SQL>
SQL> purge table t;
Table purged.
SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as filler
6 from
7 dual
8 connect by
9 level <= 1000000
10 ;
Table created.
SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')
PL/SQL procedure successfully completed.
SQL>
SQL> create index t_idx on t (id);
Index created.
SQL>
SQL> set autotrace traceonly explain
SQL>
SQL> var n number
SQL>
SQL> exec :n := 500000
PL/SQL procedure successfully completed.
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
Execution Plan
----------------------------------------------------------
Plan hash value: 2383791439
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 5175K| 162 (0)| 00:00:02 |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
4 - access("ID">TO_NUMBER(:N))
SQL>
SQL> set autotrace off
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
no rows selected
SQL>
SQL> -- Now the execution plan generated by the PARSE call issued by SQL*Plus
SQL> -- will be re-used by the subsequent executions
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 8q13ghbwgsmkv, child number 0
-------------------------------------
select /* FIND_ME */ * from ( select * from t where id > :n ) where
rownum > 1
Plan hash value: 2383791439
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 162 (100)| |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
4 - access("ID">:N)
23 rows selected.
SQL>
So that's interesting: by using the TRACEONLY EXPLAIN option I ended up with a potentially "wrong" execution plan that is actually eligible for sharing with other executions.
What surprised me most is that I expected a bind variable type mismatch (CHAR vs. NUMBER - compare the "Predicate Information" sections) between the parse and the execution, and therefore a re-optimization that actually peeked at the bind variables rather than re-using and sharing the existing cursor - but obviously the cursor was eligible for sharing. Very likely this is because the parse call didn't actually bind any variables, hence the mentioned mismatch wasn't possible.
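If you do end up with such an undesirable cursor in the Shared Pool, on recent releases (10.2.0.4 and later, depending on patch level) you may be able to evict just that cursor via DBMS_SHARED_POOL.PURGE instead of flushing the whole Shared Pool. A sketch, assuming you know the SQL_ID and have execute privilege on DBMS_SHARED_POOL - the SQL_ID used here is just the one from the example above:

```sql
-- Sketch: purge a single cursor from the Shared Pool.
-- DBMS_SHARED_POOL.PURGE expects 'address,hash_value' and flag 'C' for cursors.
declare
  v_name varchar2(100);
begin
  select address || ',' || hash_value
  into   v_name
  from   v$sqlarea
  where  sql_id = '8q13ghbwgsmkv';  -- SQL_ID from the example above

  dbms_shared_pool.purge(v_name, 'C');
end;
/
```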
Summary
So in summary I think what can be said is this:
- Don't use AUTOTRACE if you want to get the actual execution plan
- The potentially "wrong" execution plans reported by AUTOTRACE usually do not represent a threat because these are EXPLAIN PLAN cursors
- The potential threat of AUTOTRACE variants that actually execute the statement is that the plan used by this actual execution is definitely eligible for sharing with other executions - but this is no different from any other execution, so nothing special about AUTOTRACE here either
- There is a potential threat when using the AUTOTRACE TRACEONLY EXPLAIN option - the parse-only (no execute) behaviour could leave undesirable cursors behind that are eligible for sharing. This applies in particular to SQL statements using bind variables
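If you want the plan actually used instead of an EXPLAIN PLAN approximation, the GATHER_PLAN_STATISTICS hint combined with DBMS_XPLAN.DISPLAY_CURSOR avoids these issues entirely. A minimal sketch, reusing the table T from the examples above (note that SERVEROUTPUT needs to be off, otherwise DISPLAY_CURSOR reports on the DBMS_OUTPUT call issued by SQL*Plus instead of your statement):

```sql
set serveroutput off

var n number
exec :n := 500000

select /*+ gather_plan_statistics */ * from (
select * from t where id > :n
)
where rownum > 1;

-- Show the plan of the last executed statement,
-- including actual vs. estimated row counts
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```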
|* 3 | TABLE ACCESS FULL| T | 500K| 50M| 4204 (1)| 00:00:51 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
3 - filter("ID">:N)
22 rows selected.
Elapsed: 00:00:00.12
SQL>
SQL> -- Demonstrate that EXPLAIN PLAN cursors get special treatment
SQL> -- They are unshared by default
SQL> set echo off timing off feedback off long 1000000 longchunksize 1000000
EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1
SQL>
SQL> @%TEMP%\explain_plan_example
SQL> EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1
5
SQL> /
Explained.
Elapsed: 00:00:00.00
SQL>
SQL> /
Explained.
Elapsed: 00:00:00.00
SQL>
SQL> column sql_id new_value sql_id
SQL>
SQL> select * from (
2 select
3 sql_id
4 from
5 v$sqlstats
6 where
7 sql_text like 'EXPLAIN PLAN%/* FIND_ME */%rownum > 1%'
8 and sql_text not like '%v$sql%'
9 order by
10 last_active_time desc
11 )
12 where
13 rownum <= 1
14 ;
SQL_ID
-------------
ctms62wkwp7nz
1 row selected.
Elapsed: 00:00:00.03
SQL>
SQL> select
2 sql_id
3 , child_number
4 , explain_plan_cursor
5 from
6 v$sql_shared_cursor
7 where
8 sql_id = '&sql_id';
SQL_ID CHILD_NUMBER E
------------- ------------ -
ctms62wkwp7nz 0 N
ctms62wkwp7nz 1 Y
ctms62wkwp7nz 2 Y
3 rows selected.
Elapsed: 00:00:00.03
SQL>
SQL> set serveroutput on
SQL>
SQL> @sql_shared_cursor &sql_id
SQL> declare
2 c number;
3 col_cnt number;
4 col_rec dbms_sql.desc_tab;
5 col_value varchar2(4000);
6 ret_val number;
7 begin
8 c := dbms_sql.open_cursor;
9 dbms_sql.parse(c,
10 'select q.sql_text, s.*
11 from v$sql_shared_cursor s, v$sql q
12 where s.sql_id = q.sql_id
13 and s.child_number = q.child_number
14 and q.sql_id = ''&1''',
15 dbms_sql.native);
16 dbms_sql.describe_columns(c, col_cnt, col_rec);
17
18 for idx in 1 .. col_cnt loop
19 dbms_sql.define_column(c, idx, col_value, 4000);
20 end loop;
21
22 ret_val := dbms_sql.execute(c);
23
24 while(dbms_sql.fetch_rows(c) > 0) loop
25 for idx in 1 .. col_cnt loop
26 dbms_sql.column_value(c, idx, col_value);
27 if col_rec(idx).col_name in ('SQL_ID', 'ADDRESS', 'CHILD_ADDRESS',
28 'CHILD_NUMBER', 'SQL_TEXT') then
29 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
30 ' = ' || col_value);
31 elsif col_value = 'Y' then
32 dbms_output.put_line(rpad(col_rec(idx).col_name, 30) ||
33 ' = ' || col_value);
34 end if;
35 end loop;
36 dbms_output.put_line('--------------------------------------------------');
37 end loop;
38
39 dbms_sql.close_cursor(c);
40 end;
41 /
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DD87E70
CHILD_NUMBER = 0
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCD0D10
CHILD_NUMBER = 1
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------
SQL_TEXT = EXPLAIN PLAN SET STATEMENT_ID='PLUS6552708' FOR select /* FIND_ME */ * from ( select * from t where id > :n ) where rownum > 1
SQL_ID = ctms62wkwp7nz
ADDRESS = 000007FF0DD90180
CHILD_ADDRESS = 000007FF0DCAAA20
CHILD_NUMBER = 2
EXPLAIN_PLAN_CURSOR = Y
--------------------------------------------------
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.10
SQL>
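Note in passing why the EXPLAIN PLAN cursor can never be picked up by a real execution: SQL*Plus prefixes the statement with the EXPLAIN PLAN SET STATEMENT_ID=... FOR wrapper, so the resulting cursor has a different SQL text and therefore a different SQL_ID than the plain statement. A quick way to see both parent cursors side by side (a sketch based on the demo above):

```sql
-- Both the plain statement and the EXPLAIN PLAN wrapper end up as
-- two distinct parent cursors with different SQL_IDs
select sql_id
     , executions
     , substr(sql_text, 1, 50) as sql_text
from   v$sqlstats
where  sql_text like '%FIND_ME%'
and    sql_text not like '%v$sqlstats%';
```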
Other Autotrace Options
The perhaps less expected aspect comes into the picture if you use AUTOTRACE differently. There are various options, and with a particular combination AUTOTRACE doesn't really execute the statement but reports only the execution plan. So if you change the first example above from:
SET AUTOTRACE ON
to
SET AUTOTRACE TRACEONLY EXPLAIN
then have a close look at the SQL trace generated:
.
.
.
=====================
PARSING IN CURSOR #2 len=45 dep=0 uid=91 oct=3 lid=91 tim=416642144779 hv=3626603586 ad='7ff13a1c8b0' sqlid='9pj321gc2m522'
select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #2:c=0,e=64,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=416642144777
CLOSE #2:c=0,e=14,dep=0,type=0,tim=416642145372
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=93 dep=0 uid=91 oct=3 lid=91 tim=416642148753 hv=2987003528 ad='7ff13cd8ea0' sqlid='fu0myxft0n3n8'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where 1 = :n
END OF STMT
PARSE #3:c=0,e=689,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=416642148749
=====================
.
.
.
=====================
PARSING IN CURSOR #6 len=50 dep=0 uid=91 oct=3 lid=91 tim=416642233676 hv=37196885 ad='7ff138c8570' sqlid='f8cyn9w13g52p'
select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #6:c=0,e=116,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=272002086,tim=416642233673
CLOSE #6:c=0,e=32,dep=0,type=0,tim=416642237105
=====================
.
.
.
=====================
PARSING IN CURSOR #3 len=98 dep=0 uid=91 oct=50 lid=91 tim=416642243694 hv=390050481 ad='7ff1374bcf8' sqlid='8vvq0ncbmzcpj'
EXPLAIN PLAN SET STATEMENT_ID='PLUS6510526' FOR select /* FIND_ME */ * from dual where dummy = 'X'
END OF STMT
PARSE #3:c=0,e=1261,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=272002086,tim=416642243691
=====================
.
.
.
Can you spot the difference? SQL*Plus now only parses the SQL statement - there is no EXEC call - before actually running the EXPLAIN PLAN command.
Let's see what happens if the second example from above gets executed with the AUTOTRACE TRACEONLY EXPLAIN option:
-- Demonstrate that AUTOTRACE TRACEONLY EXPLAIN
-- can cause problems for other SQL executions
set echo on linesize 200 pagesize 0 trimspool on tab off
drop table t;
purge table t;
create table t
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;
exec dbms_stats.gather_table_stats(null, 't')
create index t_idx on t (id);
set autotrace traceonly explain
var n number
exec :n := 500000
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
set autotrace off
select /* FIND_ME */ * from (
select * from t where id > :n
)
where rownum > 1;
-- Now the execution plan generated by the PARSE call issued by SQL*Plus
-- will be re-used by the subsequent executions
select * from table(dbms_xplan.display_cursor(null, null));
Here's again a sample output from 11.2.0.1:
SQL> drop table t;
Table dropped.
SQL>
SQL> purge table t;
Table purged.
SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as filler
6 from
7 dual
8 connect by
9 level <= 1000000
10 ;
Table created.
SQL>
SQL> exec dbms_stats.gather_table_stats(null, 't')
PL/SQL procedure successfully completed.
SQL>
SQL> create index t_idx on t (id);
Index created.
SQL>
SQL> set autotrace traceonly explain
SQL>
SQL> var n number
SQL>
SQL> exec :n := 500000
PL/SQL procedure successfully completed.
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
Execution Plan
----------------------------------------------------------
Plan hash value: 2383791439
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 5175K| 162 (0)| 00:00:02 |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
4 - access("ID">TO_NUMBER(:N))
SQL>
SQL> set autotrace off
SQL>
SQL> select /* FIND_ME */ * from (
2 select * from t where id > :n
3 )
4 where rownum > 1;
no rows selected
SQL>
SQL> -- Now the execution plan generated by the PARSE call issued by SQL*Plus
SQL> -- will be re-used by the subsequent executions
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 8q13ghbwgsmkv, child number 0
-------------------------------------
select /* FIND_ME */ * from ( select * from t where id > :n ) where
rownum > 1
Plan hash value: 2383791439
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 162 (100)| |
| 1 | COUNT | | | | | |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| T | 50000 | 5175K| 162 (0)| 00:00:02 |
|* 4 | INDEX RANGE SCAN | T_IDX | 9000 | | 23 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM>1)
4 - access("ID">:N)
23 rows selected.
SQL>
So that's interesting: By using the TRACEONLY EXPLAIN option I now ended up with a potentially "wrong" execution plan that is actually eligible for sharing with other executions.
What surprised me most was that I expected a bind variable type mismatch (CHAR vs. NUMBER, check the "Predicate Information" section) between the parse and the subsequent execution, and therefore a re-optimization that actually peeked at the bind variables rather than re-using and sharing the existing cursor - but obviously the cursor was eligible for sharing. Very likely this is because the parse call didn't actually bind any variables, hence the mentioned mismatch couldn't arise.
Summary
So in summary I think what can be said is this:
- Don't use AUTOTRACE if you want to get the actual execution plan
- The potentially "wrong" execution plans reported by AUTOTRACE usually do not represent a threat because these are EXPLAIN PLAN cursors
- The potential threat of AUTOTRACE variants that actually execute the statement is that the plan used by this actual execution is definitely eligible for sharing with other executions - but this is no different from any other execution, so nothing special about AUTOTRACE here either
- There is a potential threat when using the AUTOTRACE TRACEONLY EXPLAIN option: the parse-only (no execute) behaviour can leave undesirable cursors behind that are eligible for sharing. This applies in particular to SQL statements using bind variables
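Should you find such a leftover cursor in the Shared Pool, one way to get rid of it without flushing the whole pool is DBMS_SHARED_POOL.PURGE (available from 10.2.0.4 on). A hedged sketch based on the SQL_ID from the demo above - PURGE expects the ADDRESS and HASH_VALUE of the parent cursor concatenated, and the flag 'C' for cursors:

```sql
-- Look up the parent cursor's address and hash value
column purge_name new_value purge_name
select address || ',' || hash_value as purge_name
from   v$sqlarea
where  sql_id = '8q13ghbwgsmkv';  -- the SQL_ID from the demo above

-- Purge just this cursor from the Shared Pool
exec sys.dbms_shared_pool.purge('&purge_name', 'C')
```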
Labels:
10gR2,
11g,
AUTOTRACE,
CBO,
Cursor Sharing,
DBMS_XPLAN,
Execution Plan,
EXPLAIN PLAN,
Plan stability,
SQL*Plus
Sunday, January 15, 2012
Incremental Partition Statistics Review
Introduction
Here is a summary of the findings while evaluating Incremental Partition Statistics that have been introduced in Oracle 11g.
The most important point to understand is that Incremental Partition Statistics are not "cost-free", so anyone who is telling you that you can gather statistics on the lowest level (partition or sub-partition in case of composite partitioning) without any noticeable overhead in comparison to non-incremental statistics (on the lowest level) is not telling you the truth.
Although this might be obvious, I've already personally heard someone making such claims, so it's probably worth mentioning.
In principle you need to test on your individual system whether the overhead that is added to each statistics update on the lowest level outweighs the overhead of actually gathering statistics on higher levels, in particular on the global level.
This might also depend on how, and how often, you have gathered statistics so far.
The overhead introduced by Incremental Partition Statistics can be significant, in terms of both runtime and data volume. You can expect the SYSAUX tablespace to grow by several GBs (for larger databases in the TB range easily tens of GBs) depending on the number of partitions, the number of columns and the distinct values per column.
To give you an idea here are some example figures from the evaluation:
Table 1: 4 million total rows, 1 GB total size, 6 range partitions, 155 columns
Table 2: 200 million total rows, 53 GB total size, 629 range-list subpartitions, 104 columns
For Table 1 Incremental stats maintained 700,000 rows in SYS.WRI$_OPTSTAT_SYNOPSIS$. For Table 2 it was 3,400,000 rows. In total for these two tables approx. 4.1 million rows and 170 MB had to be maintained in the SYS.WRI$_OPTSTAT_SYNOPSIS$ tables.
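To get an idea of the synopsis footprint on your own system, a query along these lines can be used. Note this is a sketch based on the 11.2.0.1 dictionary layout of these undocumented tables (the structure changed in 11.2.0.2), and the assumption that BO# holds the object id of the base table:

```sql
-- Synopsis rows per base table (11.2.0.1 layout, undocumented tables)
select o.name            as table_name
     , count(*)          as synopsis_rows
from   sys.wri$_optstat_synopsis_head$ h
     , sys.wri$_optstat_synopsis$      s
     , sys.obj$                        o
where  s.synopsis# = h.synopsis#
and    h.bo#       = o.obj#
group by o.name
order by count(*) desc;
```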
When I first saw this significant data volume generated for the synopsis meta data I was pretty sure that processing that amount of data will clearly cause some significant overhead, too.
And that is exactly what happens - for example a recursive DELETE statement on the SYS.WRI$_OPTSTAT_SYNOPSIS$ table took about 10 secs out of 16 secs total runtime of statistics gathering for a rather small partition of above partitioned table. Here are some more figures from the test runs:
Timing comparison on an Exadata X2-8
(tests were performed as only user on the system)
Exadata X2-8 was 11.2.0.2 BP6, for comparison purposes a full rack V2 running 11.2.0.1.2 BP6(?) was used
The following relevant parameters were used in the call to DBMS_STATS:
dbms_stats.gather_table_stats(
ownname => ...
, tabname => ...,
, partname=>'<PARTNAME>'
, granularity=>'AUTO'
, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
, method_opt => 'FOR ALL COLUMNS SIZE 1'
, cascade => true
);
where <PARTNAME> is the name of the partition that was modified. Basically it was a simulation of a per-partition data load where the data is loaded into a separate segment and afterwards an exchange (sub)partition is performed with the main table.
After exchange partition the statistics were refreshed on the main table using above call.
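For completeness: INCREMENTAL itself is not a parameter of the gather call but a table preference, so in these tests it was toggled per table along these lines:

```sql
-- Enable incremental statistics for table T (current schema)
exec dbms_stats.set_table_prefs(null, 'T', 'INCREMENTAL', 'TRUE')

-- Verify the current setting
select dbms_stats.get_prefs('INCREMENTAL', null, 'T') from dual;
```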
Modification of a single partition of above Table 1, approx. 500,000 rows resp. 110 MB of data in this single partition.
INCREMENTAL => FALSE: 7-13 seconds
INCREMENTAL => TRUE : 16 seconds (the majority of time is spent on a DELETE from
SYS.WRI$_OPTSTAT_SYNOPSIS$)
Modification of a single subpartition of above Table 2, approx. 300,000 rows resp. 75 MB of data in this single subpartition.
INCREMENTAL => FALSE: 67 seconds
INCREMENTAL => TRUE : 11.2.0.2 390 (!) seconds
11.2.0.1 30 seconds
11.2.0.2 with fix_control=8917507:OFF: 70 seconds
So the overhead ratio depends largely on the time it actually takes to gather the statistics - for rather small partitions the meta data maintenance overhead will be enormous.
On an Exadata X2-8 running 11.2.0.2, the non-incremental approach of gathering lowest level partition statistics plus partition plus global statistics for the 53 GB table (629 range-list subpartitions, 104 columns) took almost the same time as gathering incremental statistics on the lowest level only, plus the meta data maintenance / aggregation overhead.
Of course you'll appreciate that the activity performed for those two operations is vastly different - the conventional statistics approach needs to throw all processing power of the X2-8 at this problem and any concurrent activity will have to share the CPU and I/O demand of that operation, while the mostly meta data based incremental statistics only allocate a single CPU and some I/O during the processing, leaving most of the I/O and CPU resources available for other concurrent tasks.
On a larger data volume and/or slower systems the Incremental Partition Statistics will probably easily outperform the non-incremental approach.
Furthermore it should be mentioned that the tests used the "FOR ALL COLUMNS SIZE 1" METHOD_OPT option that doesn't generate any histograms. The INCREMENTAL partition statistics feature is however capable of deriving upper level histograms from lower levels of statistics with histograms in place. This can mean a significant saving in processing time if histograms need to be maintained on upper levels, since each histogram adds another pass to the DBMS_STATS processing. In fact the histograms generated by INCREMENTAL partition statistics might even be of better quality than those generated via explicit gathering, because by default a quite low sample size is used for histogram generation in order to keep the overhead as small as possible.
Note that according to the description the APPROX_GLOBAL AND PARTITION granularity also supports aggregation of histograms, but I haven't looked in detail into this option yet.
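For reference, a hedged sketch of what such a call would look like - global statistics get derived approximately from the partition level for the partition just modified. The partition name P1 is merely a placeholder; see the DBMS_STATS documentation for the caveats of this granularity:

```sql
-- Approximate global statistics derived from partition-level statistics
begin
  dbms_stats.gather_table_stats(
    ownname     => null
  , tabname     => 'T'
  , partname    => 'P1'
  , granularity => 'APPROX_GLOBAL AND PARTITION'
  );
end;
/
```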
As usual you'll have to test it yourself on your system and hardware, but the main point is that it doesn't come for free - it requires both significant space and runtime.
One idea that might make sense is limiting the column statistics to those columns that you are sure you'll use in predicates / group bys / order bys. Any columns that are only used for display purposes could be left without any column statistics. Depending on your data model this might allow to save some volume and processing time, but it needs to be maintained on a per table basis rather than a one size fits all approach.
Further Findings
Here are some further findings that I found relevant:
- INCREMENTAL => TRUE means that ESTIMATE_PERCENT will be ignored - the new approximate NDV algorithm, which reads all data but doesn't add the grouping overhead of a conventional aggregation method, is mandatory for the new feature. This means that for very large data sets former approaches using very low sample sizes will now take significantly longer (approximately the time it took to sample 10% of the data with the former approach), albeit with the benefit of producing almost 100% accurate statistics. There is currently no way around this - if you want to use INCREMENTAL you have to process 100% of the data using the new NDV algorithm. Note that this applies to 11.2 - I haven't tested this on 11.1
- INCREMENTAL doesn't maintain any synopses for indexes, so in order to obtain higher level index statistics for partitioned indexes it always includes gathering global index statistics. This does however resort to a sample and doesn't analyze the whole index. For very large indexes and/or a very large number of indexes the overhead can still be significant, so this is something to keep in mind: even with incremental partition statistics there is a component that depends on the global volume, in this case the index volume
- In order to effectively use INCREMENTAL the meta data for the synopses needs to be created initially for all partitions, even for those where the data no longer changes. So for very large (historic) data volumes this initial synopsis generation can represent a challenge that needs to be planned carefully. You need to be careful how incremental gets enabled: if you simply switch it on and use GRANULARITY=>AUTO as outlined in the manuals, the next gather statistics call on the table will gather the meta data for all (sub-)partitions of the table - this might take very, very long. It might be more sensible to gather statistics with a different GRANULARITY: this still adds the meta data maintenance overhead, but you are in control of which partitions are going to be analyzed, allowing for a step-wise approach
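A step-wise initial synopsis build could be sketched like this - each partition gets gathered explicitly with GRANULARITY => 'PARTITION' so the synopses are created one partition at a time, under your control. Table name T is a placeholder; the partition names come from the data dictionary:

```sql
-- Build synopses partition by partition instead of in one big AUTO pass
begin
  for p in (select partition_name
            from   user_tab_partitions
            where  table_name = 'T'
            order by partition_position) loop
    dbms_stats.gather_table_stats(
      ownname     => null
    , tabname     => 'T'
    , partname    => p.partition_name
    , granularity => 'PARTITION'
    );
  end loop;
end;
/
```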
- In 11.2.0.2 the underlying internal table structure has been changed significantly. In particular the table SYS.WRI$_OPTSTAT_SYNOPSIS$ has been changed from unpartitioned to composite partitioned. Interestingly it doesn't have a single index in 11.2.0.2 - it looks like having it composite-partitioned seemed to be sufficient to the developers. The change very likely has been introduced due to bug 9038395 that addresses the problem that deleting the statistics for a single table used to be dependent on the total amount of tables using incremental statistics. So that problem should be addressed now, but it still doesn't mean that the meta data maintenance overhead is now negligible
- There is a bug in 11.2.0.2 that basically rendered the incremental partition statistics unusable with the composite partitioned tables used at that client: a particular recursive SQL statement got executed many thousands of times, which meant it took up to several minutes to complete the meta data operation (see the timings above). This is tracked as bug 12833442. The behaviour can be changed via fix control 8917507, which helped in this case to arrive at reasonable runtimes, although 11.2.0.1 was still twice as fast
- INCREMENTAL => TRUE doesn't work with locked statistics, you'll always end up with an ORA-20005 Object Statistics are locked even when specifying the FORCE => TRUE option. This is tracked with bug 12369250 (according to MyOracleSupport fixed in the 11.2.0.3 patch set)
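Until that fix arrives, the obvious workaround for the ORA-20005 issue is to temporarily unlock, gather and re-lock - bearing in mind the window during which other sessions could also modify the statistics:

```sql
-- Work around ORA-20005 with locked statistics on table T
exec dbms_stats.unlock_table_stats(null, 'T')
exec dbms_stats.gather_table_stats(null, 'T', granularity => 'AUTO')
exec dbms_stats.lock_table_stats(null, 'T')
```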
Footnote
All of the above applies to 11.2.0.2 resp. 11.2.0.1. I haven't had the chance yet to repeat those tests on 11.2.0.3.
Here is a summary of the findings while evaluating Incremental Partition Statistics that have been introduced in Oracle 11g.
The most important point to understand is that Incremental Partition Statistics are not "cost-free", so anyone who is telling you that you can gather statistics on the lowest level (partition or sub-partition in case of composite partitioning) without any noticeable overhead in comparison to non-incremental statistics (on the lowest level) is not telling you the truth.
Although this might be obvious I've already personally heard someone making such claims so it's probably worth to mention.
In principle you need to test on your individual system whether the overhead that is added to each statistics update on the lowest level outweighs the overhead of actually gathering statistics on higher levels, of course in particular on global level.
This might also depend on your strategy how and how often you used to gather statistics so far.
The overhead introduced by Incremental Partition Statistics can be significant, in terms of both runtime and data volume. You can expect the SYSAUX tablespace to grow by several GBs (for larger databases in the TB range easily in the tenth of GBs) depending on the number of partitions, number of columns and distinct values per column.
To give you an idea here are some example figures from the evaluation:
Table 1: 4 million total rows, 1 GB total size, 6 range partitions, 155 columns
Table 2: 200 million total rows, 53 GB total size, 629 range-list subpartitions, 104 columns
For Table 1 Incremental stats maintained 700,000 rows in SYS.WRI$_OPTSTAT_SYNOPSIS$. For Table 2 it was 3,400,000 rows. In total for these two tables approx. 4.1 million rows and 170 MB had to be maintained in the SYS.WRI$_OPTSTAT_SYNOPSIS$ tables.
When I first saw this significant data volume generated for the synopsis meta data I was pretty sure that processing that amount of data will clearly cause some significant overhead, too.
And that is exactly what happens - for example a recursive DELETE statement on the SYS.WRI$_OPTSTAT_SYNOPSIS$ table took about 10 secs out of 16 secs total runtime of statistics gathering for a rather small partition of above partitioned table. Here are some more figures from the test runs:
Timing comparison on an Exadata X2-8
(tests were performed as only user on the system)
Exadata X2-8 was 11.2.0.2 BP6, for comparison purposes a full rack V2 running 11.2.0.1.2 BP6(?) was used
The following relevant parameters were used in the call to DBMS_STATS:
dbms_stats.gather_table_stats(
ownname => ...
, tabname => ...,
, partname=>'<PARTNAME>'
, granularity=>'AUTO'
, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
, method_opt => 'FOR ALL COLUMNS SIZE 1'
, cascade => true
);
were <PARTNAME> is the name of the partition that was modified. Basically it was a simulation of a per-partition data load where the data is loaded into a separate segment and afterwards an exchange (sub)partition is performed with the main table.
After exchange partition the statistics were refreshed on the main table using above call.
Modification of a single partition of above Table 1, approx. 500,000 rows resp. 110 MB of data in this single partition.
INCREMENTAL => FALSE: 7-13 seconds
INCREMENTAL => TRUE : 16 seconds (the majority of time is spent on a DELETE from
SYS.WRI$_OPTSTAT_SYNOPSIS$)
Modification of a single subpartition of above Table 2, approx. 300,000 rows resp. 75 MB of data in this single subpartition.
INCREMENTAL => FALSE: 67 seconds
INCREMENTAL => TRUE : 11.2.0.2 390 (!) seconds
11.2.0.1 30 seconds
11.2.0.2 with fix_control=8917507:OFF: 70 seconds
So the overhead ratio depends largely on the time it actually takes to gather the statistics - for rather small partitions the meta data maintenance overhead will be enormous.
On an Exadata X2-8 11.2.0.2 using the non-incremental approach of gathering lowest level partition statistics plus partition plus global statistics for the 53GB table (629 range-list subpartitions, 104 columns), took almost the same time as it took the incremental statistics to gather statistics only on lowest level plus the meta data maintenance / aggregation overhead.
Of course you'll appreciate that the activity performed for those two operations is vastly different - the conventional statistics approach needs to throw all processing power of the X2-8 at this problem and any concurrent activity will have to share the CPU and I/O demand of that operation, while the mostly meta data based incremental statistics only allocate a single CPU and some I/O during the processing, leaving most of the I/O and CPU resources available for other concurrent tasks.
On a larger data volume and/or slower systems the Incremental Partition Statistics will probably easily outperform the non-incremental approach.
Furthermore it should be mentioned that the tests used the "FOR ALL COLUMNS SIZE 1" METHOD_OPT option that doesn't generate any histograms. The INCREMENTAL partition statistics feature is however capable of deriving upper level histograms from lower levels of statistics with histograms in place. This can mean a significant saving in processing time if histograms need to be maintained on upper levels since each histogram adds another pass to the DBMS_STATS processing. In fact the histograms generating by INCREMENTAL partition statistics might be even of better quality than those generated via explicit gathering because by default a quite low sample size is used for histogram generation in order to keep the overhead as small as possible.
Note that according to the description the APPROX_GLOBAL AND PARTITION granularity also supports aggregation of histograms, but I haven't looked in detail into this option yet.
As usual you'll have to test it yourself on your system and hardware, but the main point is that it doesn't come for free - it requires both significant space and runtime.
One idea that might make sense is limiting the column statistics to those columns that you are sure you'll use in predicates / group bys / order bys. Any columns that are only used for display purposes could be left without any column statistics. Depending on your data model this might allow to save some volume and processing time, but it needs to be maintained on a per table basis rather than a one size fits all approach.
Further Findings
Here are some further findings that I found relevant:
- INCREMENTAL => TRUE means that ESTIMATE_PERCENT will be ignored - the new approximate NDV algorithm that reads all data but doesn't add the grouping overhead of a conventional aggregation method is mandatory for the new feature. This means in case of very large data sets to analyze that former approaches using very low sample sizes will now take significantly longer (approx. the time it takes to sample 10% of the data with the former approach), however with the benefit of producing almost 100% accurate statistics. There is currently no way around this - if you want to use INCREMENTAL you have to process 100% of the data using the new NDV algorithm. Note that this applies to 11.2 - I haven't tested this on 11.1
- INCREMENTAL doesn't maintain any synopses for indexes, so in order to obtain higher level index statistics for partitioned indexes it always includes a gather global index statistics. However it resorts to a sample size and doesn't analyze the whole index. For very large indexes and/or a very large number of indexes the overhead can still be significant, so this is something to keep in mind: Even with incremental partition statistics there is a component that is dependent on the global volume, in this case the index volume
- In order to effectively use INCREMENTAL the meta data for the synopses needs to be created initially for all partitions, even for those where the data doesn't change any longer. So for very large (historic) data volumes this initial synopsis generation can represent a challenge that needs to be planned and considered how it will be approached. You need to be careful how incremental will be enabled: If you simply switch it on and use GRANULARITY=>AUTO as outlined in the manuals the next gather statistics call on the table will gather the meta data for all (sub-)partitions of the table - this might take very, very long. It might be more sensible to gather statistics with a different GRANULARITY. This still adds the meta data maintenance overhead but you are in control of which partitions are going to be analyzed, allowing for a step-wise approach.
- In 11.2.0.2 the underlying internal table structure has been changed significantly. In particular the table SYS.WRI$_OPTSTAT_SYNOPSIS$ has been changed from unpartitioned to composite partitioned. Interestingly it doesn't have a single index in 11.2.0.2 - it looks like having it composite-partitioned seemed to be sufficient to the developers. The change very likely has been introduced due to bug 9038395 that addresses the problem that deleting the statistics for a single table used to be dependent on the total amount of tables using incremental statistics. So that problem should be addressed now, but it still doesn't mean that the meta data maintenance overhead is now negligible
- There is a bug in 11.2.0.2 that basically rendered incremental partition statistics unusable with the composite partitioned tables used at that client: a particular recursive SQL statement got executed several thousand times, which meant it took up to several minutes to complete the meta data operation (see the timings above). This is tracked with bug 12833442. The behaviour can be changed by using fix control 8917507 - which helped in this case to arrive at reasonable runtimes, although 11.2.0.1 was still twice as fast.
- INCREMENTAL => TRUE doesn't work with locked statistics; you'll always end up with an "ORA-20005: Object Statistics are locked" error, even when specifying the FORCE => TRUE option. This is tracked with bug 12369250 (according to MyOracleSupport fixed in the 11.2.0.3 patch set)
Footnote
All of the above applies to 11.2.0.1 and 11.2.0.2 respectively. I haven't had the chance yet to repeat those tests on 11.2.0.3.
Labels:
11.2.0.1,
11.2.0.2,
11g,
CBO,
DBMS_STATS,
incremental,
Partitioning,
statistics
Monday, January 9, 2012
Dynamic Sampling On Multiple Partitions - Bugs
In a recent OTN thread I was reminded of two facts about Dynamic Sampling that I already knew but had forgotten in the meantime:
1. The table level dynamic sampling hint uses a different number of blocks for sampling than session / cursor level dynamic sampling. So even if, for example, level 5 is used in both cases, the number of sampled blocks will be different - this holds for most of the available levels (obviously levels 0 and 10 are exceptions)
2. The Dynamic Sampling code uses a different approach for partitioned objects if it is faced with the situation that there are more partitions than blocks to sample according to the level (and type table/cursor/session) of Dynamic Sampling
Note that all of this applies to the case where no statistics have been gathered for the table - I don't cover the case where Dynamic Sampling gets used on top of existing statistics.
Dynamic Sampling Number Of Sample Blocks
Jonathan Lewis has a short post describing 1. above, although I believe that his post has a minor inaccuracy: The number of blocks sampled for the table level dynamic sampling is 32 * 2^(level - 1) not 32 * 2^level.
Note that the constant 32 is defined by the internal parameter "_optimizer_dyn_smp_blks" and is independent of the block size. So this is one of the cases where a larger block size potentially gives better results because more data might be sampled - of course it also means performing more work for the sampling.
Here are two excerpts from optimizer trace files that show both the difference between the table and cursor/session level sample sizes and the 2^(level - 1) formula for the table level:
Table level 5:
** Executed dynamic sampling query:
level : 5
.
.
.
max. sample block cnt. : 512
Cursor/session level 5:
** Executed dynamic sampling query:
level : 5
.
.
.
max. sample block cnt. : 64
So both cases use level 5, but the number of sample blocks is different - for table level 5 it is 32 * 2^4 = 32 * 16 = 512 blocks.
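The table-level formula can be checked with a small calculation (a sketch based on the figures above; the constant 32 corresponds to "_optimizer_dyn_smp_blks"):

```python
# Sketch: number of blocks sampled by table-level dynamic sampling
# according to the formula 32 * 2^(level - 1) discussed above.
# Levels 0 and 10 are special cases and not covered by the formula.

DYN_SMP_BLKS = 32  # value of "_optimizer_dyn_smp_blks"

def table_level_sample_blocks(level):
    """Table-level dynamic sampling: 32 * 2^(level - 1) blocks."""
    return DYN_SMP_BLKS * 2 ** (level - 1)

for level in range(1, 10):
    print(level, table_level_sample_blocks(level))
# Level 5 yields 32 * 16 = 512 blocks, matching the
# "max. sample block cnt. : 512" line in the table-level trace,
# whereas the cursor/session level 5 trace shows only 64 blocks.
```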
Dynamic Sampling On Multiple Partitions
Point 2. above is also described in one of the comments to the post mentioned. In principle the Dynamic Sampling code seems to assume an overhead of one sample block per (sub)segment, so the effective number of blocks to sample will fall short by the number of (sub)segments to sample.
Probably this is based on the assumption that the segment header block needs to be accessed anyway when reading a segment.
If the code didn't cater for this fact it could potentially end up with an effective number of blocks sampled that is far greater than defined by the sample size when dealing with partitioned objects.
For non-partitioned objects this is not a big deal because it means exactly one block less than defined by the sample size.
But if Dynamic Sampling needs to sample multiple partitions this has several consequences:
a. The number of blocks that are effectively sampled for data can be far less than expected according to the number of blocks to be sampled, because the code reduces the number of blocks by the number of partitions to sample
b. The point above poses a special challenge if there are actually more partitions to sample than blocks
Note that Dynamic Sampling uses static / compile time partition pruning information to determine the number of partitions that need to be sampled.
The upshot of this is that when sampling multiple partitions the sample sizes of the lower cursor/session Dynamic Sampling levels can be far too small for reasonable sample results.
If the Dynamic Sampling code faces the situation where more partitions need to be sampled than blocks, it uses a different approach.
Rather than sampling the whole table and therefore potentially accessing more partitions than blocks defined by the sample size it will randomly select (sample blocks / 2) subsegments.
According to the number of blocks determined per subsegment it will then use a sample size such that in total (sample blocks / 2) blocks will be sampled for data.
Of course you'll appreciate that this means that on average exactly one data block will be sampled per subsegment.
The sample query looks different in such a case because the subsegments sampled are explicitly mentioned and combined via UNION ALL, resulting in quite a lengthy statement - even with a small sample size like 32 blocks, 16 queries on subsegments will be UNIONed together.
Here are again two excerpts from optimizer trace files that show the two different approaches in action:
More sample blocks than partitions:
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=17993
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ 1 AS C1, 1 AS C2 FROM "T" SAMPLE BLOCK (0.711388 , 1) SEED (1) "T") SAMPLESUB
*** 2012-01-03 09:45:22.695
** Executed dynamic sampling query:
level : 5
sample pct. : 0.711388
total partitions : 384
partitions for sampling : 384
actual sample size : 7452
filtered sample card. : 7452
orig. card. : 98028
block cnt. table stat. : 17993
block cnt. for sampling: 17993
Potentially all partitions get sampled and the query used is similar to the one used for non-partitioned objects.
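The figures in this excerpt fit the "one sample block per segment" overhead described above. This is a sanity check I did on the trace numbers, assuming the excerpt comes from table-level 5 sampling (512 blocks):

```python
# Sanity check on the trace figures above: the effective sample
# percentage matches "sample blocks minus one block per (sub)segment".

sample_blocks = 512    # table-level dynamic sampling, level 5: 32 * 2^4
segments = 384         # "partitions for sampling : 384"
total_blocks = 17993   # "block cnt. for sampling: 17993"

effective_blocks = sample_blocks - segments          # 128 blocks left for data
sample_pct = effective_blocks / total_blocks * 100   # percentage of the table

print(round(sample_pct, 6))  # ~0.711388, matching "sample pct. : 0.711388"
```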
Equal or fewer blocks than partitions:
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=1496
*** 2012-01-03 09:44:04.492
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT 1 AS C1, 1 AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(6) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(21) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(28) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(30) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(68) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(80) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(83) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(98) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(102) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(109) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(134) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(141) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") 
*/ * FROM "T" SUBPARTITION(153) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(158) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(177) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(179) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(206) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(249) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(257) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(260) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(263) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(265) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(273) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(277) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM 
"T" SUBPARTITION(309) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(339) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(342) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(359) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(368) SAMPLE BLOCK (2.139037 , 1) SEED (1))) "T") SAMPLESUB
** Executed dynamic sampling query:
level : 5
sample pct. : 2.139037
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2583
filtered sample card. : 2583
orig. card. : 98028
block cnt. table stat. : 1496
block cnt. for sampling: 17952
partition subset block cnt. : 1496
You can clearly see that the query looks quite different, listing a number of subpartitions explicitly. The text dumped to the trace file is also different and says that the sampling will be restricted to 32 partitions.
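Using the figures from this excerpt, the "(sample blocks / 2) subsegments" logic described above can be verified with a small calculation (a sketch; all figures are taken from the trace):

```python
# Sanity check on the trace figures above: cursor/session level 5
# allows 64 sample blocks, half of which determine the number of
# subsegments to sample; the sample percentage is then chosen so
# that about (sample blocks / 2) data blocks are read in total.

cursor_sample_blocks = 64  # "max. sample block cnt." for session level 5
subset_blocks = 1496       # "partition subset block cnt. : 1496"

subsegments = cursor_sample_blocks // 2          # 32 subpartitions selected
sample_pct = subsegments / subset_blocks * 100   # blocks to sample as a pct

print(subsegments)           # 32, matching "partitions actually sampled from"
print(round(sample_pct, 6))  # ~2.139037, matching "sample pct. : 2.139037"
```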
And it is this special case where, in versions below 11.2.0.3, a silly bug in the code leads to incorrect cost estimates: when putting together the number of blocks that should be used for sampling and the number extrapolated for the whole table, the code copies the wrong figure into the table stats - it uses the number of blocks to sample instead of the assumed table size. This can lead to a dramatic cost underestimate for a corresponding full table scan operation.
The issue seems to be fixed in 11.2.0.3, but you can see the problem in the above excerpt from 11.2.0.1 by carefully checking these lines:
...
** Dynamic sampling updated table stats.: blocks=1496 <=== wrong number copied from below
...
block cnt. table stat. : 1496 <=== this should be on the next line
block cnt. for sampling: 17952 <=== this should be on the previous line
partition subset block cnt. : 1496
The two figures "block cnt. for sampling" and "block cnt. table stat." are swapped - and the wrong number is copied to the table stats line.
This will result in a potential underestimate of the table blocks. The first plan is generated with the session level 5 sample size, where the bug copies the wrong number of blocks:
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 246 | | | |
| 1 | PARTITION RANGE ALL | | 996K | 89M | 246 | 00:00:03 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 996K | 89M | 246 | 00:00:03 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 996K | 89M | 246 | 00:00:03 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
The second plan is generated for the same data set but using the table level 5 sample size that results in using the different code path that is not affected by the bug:
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 3637 | | | |
| 1 | PARTITION RANGE ALL | | 970K | 86M | 3637 | 00:00:44 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 970K | 86M | 3637 | 00:00:44 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 970K | 86M | 3637 | 00:00:44 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
Note that although a minor discrepancy might be explained by the different sample sizes, a cost estimate that differs by an order of magnitude is clearly questionable.
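As a rough plausibility check (a sketch, not an exact cost model - the figures are taken from the traces and plans above), the cost discrepancy is consistent with the swapped block counts:

```python
# Rough plausibility check: the full table scan cost should scale
# roughly with the number of table blocks, so copying 1496 instead
# of 17952 blocks explains a cost drop of about one order of magnitude.

blocks_correct = 17952  # "block cnt. for sampling" - extrapolated table size
blocks_buggy = 1496     # the figure wrongly copied into the table stats
cost_buggy = 246        # cost of the first (affected) plan
cost_correct = 3637     # cost of the second (unaffected) plan

print(blocks_correct / blocks_buggy)        # 12.0
print(round(cost_correct / cost_buggy, 1))  # ~14.8 - same order of magnitude
```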
Nasty Bug When Using Indexes
Finally there is another nasty bug waiting for you in the case of partitioned objects - and this time it doesn't matter whether the number of partitions is greater or less than the number of blocks to be sampled:
Dynamic Sampling will also make use of eligible indexes if a filter predicate is applied to a table and a suitable index exists (which probably means an index that starts with the columns of the applied predicates, but I haven't investigated that in full detail).
The idea behind this is probably that by using the index a very cheap operation can be used to obtain a very precise selectivity estimate for highly selective predicates. Dynamic Sampling has some built-in sanity checks that reject the Dynamic Sampling result if too few rows pass the applied filter predicates - similar to saying "not enough data found to provide a reasonable estimate". So if the filter predicates identify only a few rows out of many, a pretty high sample level is required to prevent the Dynamic Sampling results from being rejected by these sanity checks.
Things look different, however, if there is a suitable index available: Dynamic Sampling will run an additional index-only query that is limited to a small number of rows (2,500 seems to be a common number) with a WHERE clause corresponding to the filter predicates. If the number of rows returned by this query is less than 2,500, Dynamic Sampling knows that this corresponds exactly to the cardinality / selectivity of the filter predicates.
In the case of partitioned objects, though, there is again a silly bug: the case where 100% of the rows returned by this query match (the 2,500-row limit being reached) is not handled correctly - so for any filter predicate that matches more than 2,500 rows the cardinality / selectivity estimate will potentially be incorrect.
Here are again two optimizer trace excerpts that show the bug in action:
Without a suitable index the cardinality estimate for a not really selective predicate (90%) is in the right ballpark:
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=1585
*** 2012-01-09 09:53:13.651
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE */ 1 AS C1, CASE WHEN "T"."ID">100000 THEN 1 ELSE 0 END AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(5) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(20) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(27) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(29) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(67) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(79) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(82) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(97) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(101) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(108) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(133) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(140) SAMPLE BLOCK (2.018927 , 1) SEED (1)) 
UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(152) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(157) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(175) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(178) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(204) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(248) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(256) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(259) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(262) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(264) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(272) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(276) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL 
(SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(308) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(338) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(340) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(358) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(367) SAMPLE BLOCK (2.018927 , 1) SEED (1))) "T") SAMPLESUB
*** 2012-01-09 09:53:13.869
** Executed dynamic sampling query:
level : 5
sample pct. : 2.018927
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2063
filtered sample card. : 2003
orig. card. : 98028
block cnt. table stat. : 1585
block cnt. for sampling: 19020
partition subset block cnt. : 1585
max. sample block cnt. : 64
sample block cnt. : 32
min. sel. est. : 0.05000000
** Using dynamic sampling card. : 1226196
** Dynamic sampling updated table card.
** Using single table dynamic sel. est. : 0.97091614
Table: T Alias: T
Card: Original: 1226196 Rounded: 1190533 Computed: 1190533.13 Non Adjusted: 1190533.13
.
.
.
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 360 | | | |
| 1 | PARTITION RANGE ALL | | 1163K | 103M | 360 | 00:00:05 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 1163K | 103M | 360 | 00:00:05 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 1163K | 103M | 360 | 00:00:05 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
Predicate Information:
----------------------
3 - filter("ID">100000)
With a suitable index in place the cardinality is estimated at 2,500 for the same data set:
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated index stats.: T_IDX, blocks=3840
** Dynamic sampling index access candidate : T_IDX
** Dynamic sampling updated table stats.: blocks=1585
*** 2012-01-09 10:01:32.960
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE */ 1 AS C1, CASE WHEN "T"."ID">100000 THEN 1 ELSE 0 END AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(5) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(20) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(27) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(29) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(67) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(79) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(82) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(97) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(101) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(108) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(133) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(140) SAMPLE BLOCK (2.018927 , 1) SEED (1)) 
UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(152) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(157) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(175) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(178) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(204) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(248) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(256) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(259) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(262) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(264) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(272) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(276) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL 
(SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(308) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(338) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(340) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(358) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(367) SAMPLE BLOCK (2.018927 , 1) SEED (1))) "T") SAMPLESUB
*** 2012-01-09 10:01:33.100
** Executed dynamic sampling query:
level : 5
sample pct. : 2.018927
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2063
filtered sample card. : 2003
orig. card. : 98028
block cnt. table stat. : 1585
block cnt. for sampling: 19020
partition subset block cnt. : 1585
max. sample block cnt. : 64
sample block cnt. : 32
min. sel. est. : 0.05000000
** Using recursive dynamic sampling card. est. : 1226195.625000
*** 2012-01-09 10:01:33.163
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), NVL(SUM(C3),0) FROM (SELECT /*+ NO_PARALLEL("T") INDEX("T" T_IDX) NO_PARALLEL_INDEX("T") */ 1 AS C1, 1 AS C2, 1 AS C3 FROM "T" "T" WHERE "T"."ID">100000 AND ROWNUM <= 2500) SAMPLESUB
*** 2012-01-09 10:01:33.179
** Executed dynamic sampling query:
level : 5
sample pct. : 100.000000
total partitions : 384
partitions for sampling : 384
actual sample size : 1226196
filtered sample card. : 2500
filtered sample card. (index T_IDX): 2500
orig. card. : 1226196
block cnt. table stat. : 1585
block cnt. for sampling: 1585
max. sample block cnt. : 4294967295
sample block cnt. : 1585
min. sel. est. : 0.05000000
** Increasing dynamic sampling selectivity
for predicate 0 from 0.002039 to 0.970916.
** Increasing dynamic sampling selectivity
for predicate 1 from 0.002039 to 0.970916.
index T_IDX selectivity est.: 0.00203883
** Using dynamic sampling card. : 1226196
** Dynamic sampling updated table card.
** Using single table dynamic sel. est. : 0.00203883
Table: T Alias: T
Card: Original: 1226196 Rounded: 2500 Computed: 2500.00 Non Adjusted: 2500.00
.
.
.
-------------------------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
-------------------------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 55 | | | |
| 1 | PARTITION RANGE ALL | | 2500 | 222K | 55 | 00:00:01 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 2500 | 222K | 55 | 00:00:01 | 1 | 32 |
| 3 | TABLE ACCESS BY LOCAL INDEX ROWID | T | 2500 | 222K | 55 | 00:00:01 | 1 | 384 |
| 4 | INDEX RANGE SCAN | T_IDX | 2500 | | 20 | 00:00:01 | 1 | 384 |
-------------------------------------------------------+-----------------------------------+---------------+
Predicate Information:
----------------------
4 - access("ID">100000)
Again it can be seen from these lines:
** Increasing dynamic sampling selectivity
for predicate 0 from 0.002039 to 0.970916.
** Increasing dynamic sampling selectivity
for predicate 1 from 0.002039 to 0.970916.
index T_IDX selectivity est.: 0.00203883
that in principle the selectivity estimate from the table level operation is supposed to be used, but in the end the wrong selectivity gets copied over, which is then echoed in the final execution plan.
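As a cross-check (a sketch based only on the trace figures above), the numbers fit together as follows: 2,003 matching rows out of 2,063 sampled gives the ~0.97 selectivity that should have been used, while the 2,500-row cap of the index probe divided by the table cardinality yields the 0.00203883 that actually gets used:

```python
# Sanity check on the trace figures: the table-level sample suggests
# ~97% selectivity, but the buggy code ends up using the index probe
# figure of 2,500 / 1,226,196 instead.

table_card = 1226196  # "Using dynamic sampling card. : 1226196"
sample_size = 2063    # "actual sample size : 2063"
filtered_card = 2003  # "filtered sample card. : 2003"
probe_limit = 2500    # ROWNUM <= 2500 cap of the index-only probe query

table_sel = filtered_card / sample_size  # ~0.970916 - the correct estimate
buggy_sel = probe_limit / table_card     # ~0.00203883 - what gets used

print(round(table_sel, 8))          # ~0.97091614, matching the trace
print(round(table_card * buggy_sel))  # 2500 - the cardinality in the plan
```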
This bug is tracked as bug "6408301: Bad cardinality estimate from dynamic sampling for indexes on partitioned table" and patches are available. The issue is fixed in 11.2.0.2, but the "wrong number of table blocks" issue is only fixed in 11.2.0.3. I don't have a bug number at hand for that one, though.
Summary
If you plan to use Dynamic Sampling on partitioned objects with many partitions, where the number of partitions to sample cannot be significantly limited by partition pruning, the results of Dynamic Sampling might be questionable at lower levels.
In addition there is a bug that leads to wrong cost estimates for a full segment scan operation that is only fixed in the most recent releases.
It probably makes sense to use higher Dynamic Sampling levels in such cases - the side effect of this is not only more reasonable sampling results but it might also allow to avoid the mentioned bug if the number of blocks sampled is greater than the number of partitions to sample.
Be aware of the case where an index can be used by Dynamic Sampling in addition - for partitioned objects a bug might lead to dramatic underestimates of the cardinality.
Testcase Script
The issues described here can easily reproduced by using the following simple test case:
drop table t;
purge table t;
create table t
partition by range (pkey)
subpartition by hash (hash_id) subpartitions 32
(
partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
, partition pkey_4 values less than (5)
, partition pkey_5 values less than (6)
, partition pkey_6 values less than (7)
, partition pkey_7 values less than (8)
, partition pkey_8 values less than (9)
, partition pkey_9 values less than (10)
, partition pkey_10 values less than (11)
, partition pkey_11 values less than (12)
, partition pkey_12 values less than (13)
)
storage (initial 64k)
as
select
rownum as id
, mod(rownum, 12) + 1 as pkey
--, 12 as pkey
--, 1 as hash_id
, rownum as hash_id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;
set echo on time on
alter session set optimizer_dynamic_sampling = 5;
alter session set tracefile_identifier = 'composite_part_dyn_samp';
alter session set events '10053 trace name context forever, level 1';
explain plan
for
select * from t
;
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
;
alter session set tracefile_identifier = 'composite_part_dyn_samp_where';
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
where id > 100000
;
alter session set tracefile_identifier = 'dummy';
create index t_idx on t (id) global;
alter session set tracefile_identifier = 'composite_part_dyn_samp_index';
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
where id > 100000
;
1. The table level dynamic sampling hint uses a different number of blocks for sampling than session / cursor level dynamic sampling. So even if, for example, level 5 is used in both cases, the number of sampled blocks will differ for most of the ten available levels (levels 0 and 10 being the obvious exceptions).
2. The Dynamic Sampling code uses a different approach for partitioned objects when it is faced with more partitions than blocks to sample according to the level (and type - table / cursor / session) of Dynamic Sampling.
Note that all of this applies to the case where no statistics have been gathered for the table - I don't cover the case where Dynamic Sampling is used on top of existing statistics.
Dynamic Sampling Number Of Sample Blocks
Jonathan Lewis has a short post describing point 1 above, although I believe his post contains a minor inaccuracy: the number of blocks sampled for table level dynamic sampling is 32 * 2^(level - 1), not 32 * 2^level.
Note that the constant 32 is defined by the internal parameter "_optimizer_dyn_smp_blks" and is independent of the block size. So this is one of the cases where a larger block size potentially gives better results because more data might be sampled - of course, it also means performing more work for the sampling.
Here are two excerpts from optimizer trace files that show both the difference between the table and cursor/session level sample sizes and the 2^(level - 1) formula for the table level:
Table level 5:
** Executed dynamic sampling query:
level : 5
.
.
.
max. sample block cnt. : 512
Cursor/session level 5:
** Executed dynamic sampling query:
level : 5
.
.
.
max. sample block cnt. : 64
So both cases use level 5, but the number of sample blocks is different: for table level 5 it is 32 * 2^4 = 32 * 16 = 512 blocks.
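As a quick sketch (in Python rather than SQL, and with a function name of my own choosing), the table level scaling can be written down directly; I deliberately don't model the cursor/session levels here because they follow a different, non-uniform multiplier scheme rather than a simple doubling formula:

```python
# Sketch: sample block count for the table-level dynamic_sampling hint.
# The base of 32 blocks mirrors the "_optimizer_dyn_smp_blks" default.

BASE_BLOCKS = 32

def table_hint_sample_blocks(level):
    # 32 * 2^(level - 1), per the formula discussed above
    return BASE_BLOCKS * 2 ** (level - 1)

# Level 5 via the table-level hint: 32 * 2^4 = 512 blocks,
# matching "max. sample block cnt. : 512" in the trace excerpt above.
print(table_hint_sample_blocks(5))  # 512
```

Compare this to the "max. sample block cnt. : 64" shown for cursor/session level 5 above.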
Dynamic Sampling On Multiple Partitions
Point 2. above is also described in one of the comments to the post mentioned. In principle the Dynamic Sampling code seems to assume an overhead of one sample block per (sub)segment, so the effective number of blocks to sample will fall short by the number of (sub)segments to sample.
Probably this is based on the assumption that the segment header block needs to be accessed anyway when reading a segment.
If the code didn't cater for this fact it could potentially end up with an effective number of blocks sampled that is far greater than defined by the sample size when dealing with partitioned objects.
For non-partitioned objects this is not a big deal because it means exactly one block less than defined by the sample size.
But if Dynamic Sampling needs to sample multiple partitions this has several consequences:
a. The number of blocks that are effectively sampled for data can be far less than expected from the defined sample size, because the code reduces the number of blocks by the number of partitions to sample.
b. The point above poses a special challenge if there are actually more partitions to sample than blocks.
Note that Dynamic Sampling uses static / compile time partition pruning information to determine the number of partitions that need to be sampled.
The upshot of this is that when sampling multiple partitions the sample sizes of the lower cursor/session Dynamic Sampling levels can be far too small for reasonable sample results.
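A minimal sketch of this accounting (the function name and the one-sample-block-of-overhead-per-(sub)segment assumption are mine, inferred from the behaviour described above):

```python
# Sketch: effective number of data blocks sampled, assuming the code
# reserves one block of overhead (segment header) per (sub)segment.

def effective_data_blocks(sample_blocks, segments_to_sample):
    return sample_blocks - segments_to_sample

# Non-partitioned object: barely noticeable.
print(effective_data_blocks(64, 1))    # 63

# 384 subpartitions at cursor/session level 5 (64 blocks):
# more partitions than blocks, so nothing would be left for data -
# which is what forces the alternative sampling approach described next.
print(effective_data_blocks(64, 384))  # -320
```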
If the Dynamic Sampling code faces the situation where more partitions need to be sampled than blocks, it uses a different approach.
Rather than sampling the whole table and therefore potentially accessing more partitions than blocks defined by the sample size it will randomly select (sample blocks / 2) subsegments.
Based on the number of blocks determined per subsegment it will then use a sample percentage such that in total (sample blocks / 2) blocks will be sampled for data.
Of course you'll appreciate that this means that, on average, exactly one data block will be sampled for data per subsegment.
The sample query looks different in such a case because the sampled subsegments are explicitly mentioned and combined via UNION ALL, resulting in quite a lengthy statement - even with a small sample size like 32 blocks, 16 queries on subsegments will be UNIONed together.
Here are again two excerpts from optimizer trace files that show the two different approaches in action:
More sample blocks than partitions:
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=17993
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ 1 AS C1, 1 AS C2 FROM "T" SAMPLE BLOCK (0.711388 , 1) SEED (1) "T") SAMPLESUB
*** 2012-01-03 09:45:22.695
** Executed dynamic sampling query:
level : 5
sample pct. : 0.711388
total partitions : 384
partitions for sampling : 384
actual sample size : 7452
filtered sample card. : 7452
orig. card. : 98028
block cnt. table stat. : 17993
block cnt. for sampling: 17993
Potentially all partitions get sampled and the query used is similar to the one used for non-partitioned objects.
Sample blocks less than or equal to the number of partitions:
** Performing dynamic sampling initial checks. **
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=1496
*** 2012-01-03 09:44:04.492
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT 1 AS C1, 1 AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(6) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(21) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(28) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(30) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(68) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(80) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(83) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(98) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(102) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(109) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(134) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(141) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") 
*/ * FROM "T" SUBPARTITION(153) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(158) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(177) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(179) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(206) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(249) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(257) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(260) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(263) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(265) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(273) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(277) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM 
"T" SUBPARTITION(309) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(339) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(342) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(359) SAMPLE BLOCK (2.139037 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(368) SAMPLE BLOCK (2.139037 , 1) SEED (1))) "T") SAMPLESUB
** Executed dynamic sampling query:
level : 5
sample pct. : 2.139037
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2583
filtered sample card. : 2583
orig. card. : 98028
block cnt. table stat. : 1496
block cnt. for sampling: 17952
partition subset block cnt. : 1496
You can clearly see that the query looks quite different, listing a number of subpartitions explicitly. The text dumped to the trace file is also different and shows that the sampling is restricted to 32 (sub)partitions.
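The sample percentage in the trace can in fact be reconstructed from the other figures shown. This is a back-of-the-envelope check (in Python, with variable names of my own), assuming that half of the 64-block level 5 sample size is spread over the selected subpartitions' blocks:

```python
# Reconstruct "sample pct. : 2.139037" from the trace figures above.
max_sample_blocks = 64                     # cursor/session level 5
blocks_to_sample = max_sample_blocks // 2  # 32, one per selected subsegment
subset_block_cnt = 1496                    # "partition subset block cnt."

sample_pct = 100.0 * blocks_to_sample / subset_block_cnt
print(round(sample_pct, 6))                # 2.139037
```

The result matches the trace figure, supporting the "(sample blocks / 2) subsegments, one data block each" interpretation above.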
And it is this special case where, in versions below 11.2.0.3, a silly bug in the code leads to incorrect cost estimates: when putting together the number of blocks to be used for sampling and the number extrapolated for the whole table, the code copies the wrong figure into the table stats - it uses the number of blocks to sample instead of the assumed table size. This can lead to a dramatic cost underestimate for a corresponding full table scan operation.
The issue seems to be fixed in 11.2.0.3, but you can see the problem in the above excerpt from 11.2.0.1 by checking these lines carefully:
...
** Dynamic sampling updated table stats.: blocks=1496 <=== wrong number copied from below
...
block cnt. table stat. : 1496 <=== this should be on the next line
block cnt. for sampling: 17952 <=== this should be on the previous line
partition subset block cnt. : 1496
The two figures "block cnt. for sampling" and "block cnt. table stat." are swapped - and the wrong number is copied to the table stats line.
This will result in a potential underestimate of the table blocks. The first plan is generated with the session level 5 sample size where the bug copies the wrong number of blocks:
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 246 | | | |
| 1 | PARTITION RANGE ALL | | 996K | 89M | 246 | 00:00:03 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 996K | 89M | 246 | 00:00:03 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 996K | 89M | 246 | 00:00:03 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
The second plan is generated for the same data set but using the table level 5 sample size that results in using the different code path that is not affected by the bug:
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 3637 | | | |
| 1 | PARTITION RANGE ALL | | 970K | 86M | 3637 | 00:00:44 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 970K | 86M | 3637 | 00:00:44 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 970K | 86M | 3637 | 00:00:44 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
Note that although a minor discrepancy might be explained by the different sample sizes, a cost estimate that differs by an order of magnitude is clearly questionable.
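A rough cross-check, treating the full table scan cost as roughly proportional to the number of table blocks the optimizer works with (a simplification that ignores CPU costing and the small cardinality difference between the two runs):

```python
# The bug makes the optimizer store the blocks-to-sample figure
# instead of the extrapolated table size as the table block count.
buggy_blocks = 1496          # "block cnt. table stat." (wrongly copied)
extrapolated_blocks = 17952  # "block cnt. for sampling" (real extrapolation)

block_ratio = extrapolated_blocks / buggy_blocks
print(block_ratio)           # 12.0

# The observed full table scan costs (246 vs. 3637) differ by a
# similar factor:
cost_ratio = 3637 / 246
print(round(cost_ratio, 1))  # 14.8
```

The two ratios are in the same ballpark, consistent with the wrongly copied block count being the main driver of the cost underestimate.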
Nasty Bug When Using Indexes
Finally there is another nasty bug waiting for you in the case of partitioned objects - and this time it doesn't matter whether the number of partitions is greater or smaller than the number of blocks to be sampled:
Dynamic Sampling will also make use of eligible indexes if a filter predicate is applied to a table and a suitable index exists (which probably means an index that starts with the columns used in the predicates, but I haven't investigated that to the full extent).
The idea behind this is probably that using the index is a very cheap way to obtain a very precise selectivity estimate for highly selective predicates. Dynamic Sampling has some built-in sanity checks that reject the Dynamic Sampling result if not a reasonable number of rows passes the filter predicates applied - similar to saying "not enough data found to provide a reasonable estimate". So if the filter predicates identify only a few rows out of many, a pretty high sample level is required to prevent the Dynamic Sampling results from being rejected by these sanity checks.
Things look different, however, if there is a suitable index available: Dynamic Sampling will run an additional index-only query that is limited to a small number of rows (2,500 seems to be a common number) with a WHERE clause corresponding to the filter predicates. If the number of rows returned by this query is less than 2,500, Dynamic Sampling knows that this corresponds exactly to the cardinality / selectivity of the filter predicates.
In the case of partitioned objects, though, there is again a silly bug: the case where 100% of the rows returned by this capped query match is not handled correctly - so for any filter predicate that matches more than 2,500 rows the cardinality / selectivity estimate will potentially be incorrect.
Here are again two optimizer trace excerpts that show the bug in action:
Without a suitable index the cardinality estimate for a predicate that is not really selective (90% of the rows match) is in the right ballpark:
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated table stats.: blocks=1585
*** 2012-01-09 09:53:13.651
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE */ 1 AS C1, CASE WHEN "T"."ID">100000 THEN 1 ELSE 0 END AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(5) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(20) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(27) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(29) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(67) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(79) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(82) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(97) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(101) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(108) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(133) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(140) SAMPLE BLOCK (2.018927 , 1) SEED (1)) 
UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(152) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(157) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(175) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(178) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(204) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(248) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(256) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(259) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(262) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(264) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(272) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(276) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL 
(SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(308) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(338) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(340) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(358) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(367) SAMPLE BLOCK (2.018927 , 1) SEED (1))) "T") SAMPLESUB
*** 2012-01-09 09:53:13.869
** Executed dynamic sampling query:
level : 5
sample pct. : 2.018927
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2063
filtered sample card. : 2003
orig. card. : 98028
block cnt. table stat. : 1585
block cnt. for sampling: 19020
partition subset block cnt. : 1585
max. sample block cnt. : 64
sample block cnt. : 32
min. sel. est. : 0.05000000
** Using dynamic sampling card. : 1226196
** Dynamic sampling updated table card.
** Using single table dynamic sel. est. : 0.97091614
Table: T Alias: T
Card: Original: 1226196 Rounded: 1190533 Computed: 1190533.13 Non Adjusted: 1190533.13
.
.
.
---------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
---------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 360 | | | |
| 1 | PARTITION RANGE ALL | | 1163K | 103M | 360 | 00:00:05 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 1163K | 103M | 360 | 00:00:05 | 1 | 32 |
| 3 | TABLE ACCESS FULL | T | 1163K | 103M | 360 | 00:00:05 | 1 | 384 |
---------------------------------------+-----------------------------------+---------------+
Predicate Information:
----------------------
3 - filter("ID">100000)
With a suitable index in place the cardinality is estimated at 2,500 for the same data set:
** Dynamic sampling initial checks returning TRUE (level = 5).
** Dynamic sampling updated index stats.: T_IDX, blocks=3840
** Dynamic sampling index access candidate : T_IDX
** Dynamic sampling updated table stats.: blocks=1585
*** 2012-01-09 10:01:32.960
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE */ 1 AS C1, CASE WHEN "T"."ID">100000 THEN 1 ELSE 0 END AS C2 FROM ((SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(5) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(20) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(27) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(29) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(67) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(79) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(82) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(97) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(101) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(108) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(133) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(140) SAMPLE BLOCK (2.018927 , 1) SEED (1)) 
UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(152) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(157) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(175) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(176) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(178) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(204) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(205) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(248) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(256) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(259) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(262) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(264) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(272) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(276) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL 
(SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(308) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(338) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(340) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(341) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(358) SAMPLE BLOCK (2.018927 , 1) SEED (1)) UNION ALL (SELECT /*+ NO_PARALLEL("T") FULL("T") NO_PARALLEL_INDEX("T") */ * FROM "T" SUBPARTITION(367) SAMPLE BLOCK (2.018927 , 1) SEED (1))) "T") SAMPLESUB
*** 2012-01-09 10:01:33.100
** Executed dynamic sampling query:
level : 5
sample pct. : 2.018927
total partitions : 384
partitions for sampling : 384
partitions actually sampled from : 32
actual sample size : 2063
filtered sample card. : 2003
orig. card. : 98028
block cnt. table stat. : 1585
block cnt. for sampling: 19020
partition subset block cnt. : 1585
max. sample block cnt. : 64
sample block cnt. : 32
min. sel. est. : 0.05000000
** Using recursive dynamic sampling card. est. : 1226195.625000
*** 2012-01-09 10:01:33.163
** Generated dynamic sampling query:
query text :
SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS opt_param('parallel_execution_enabled', 'false') NO_PARALLEL(SAMPLESUB) NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0), NVL(SUM(C3),0) FROM (SELECT /*+ NO_PARALLEL("T") INDEX("T" T_IDX) NO_PARALLEL_INDEX("T") */ 1 AS C1, 1 AS C2, 1 AS C3 FROM "T" "T" WHERE "T"."ID">100000 AND ROWNUM <= 2500) SAMPLESUB
*** 2012-01-09 10:01:33.179
** Executed dynamic sampling query:
level : 5
sample pct. : 100.000000
total partitions : 384
partitions for sampling : 384
actual sample size : 1226196
filtered sample card. : 2500
filtered sample card. (index T_IDX): 2500
orig. card. : 1226196
block cnt. table stat. : 1585
block cnt. for sampling: 1585
max. sample block cnt. : 4294967295
sample block cnt. : 1585
min. sel. est. : 0.05000000
** Increasing dynamic sampling selectivity
for predicate 0 from 0.002039 to 0.970916.
** Increasing dynamic sampling selectivity
for predicate 1 from 0.002039 to 0.970916.
index T_IDX selectivity est.: 0.00203883
** Using dynamic sampling card. : 1226196
** Dynamic sampling updated table card.
** Using single table dynamic sel. est. : 0.00203883
Table: T Alias: T
Card: Original: 1226196 Rounded: 2500 Computed: 2500.00 Non Adjusted: 2500.00
.
.
.
-------------------------------------------------------+-----------------------------------+---------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time | Pstart| Pstop |
-------------------------------------------------------+-----------------------------------+---------------+
| 0 | SELECT STATEMENT | | | | 55 | | | |
| 1 | PARTITION RANGE ALL | | 2500 | 222K | 55 | 00:00:01 | 1 | 12 |
| 2 | PARTITION HASH ALL | | 2500 | 222K | 55 | 00:00:01 | 1 | 32 |
| 3 | TABLE ACCESS BY LOCAL INDEX ROWID | T | 2500 | 222K | 55 | 00:00:01 | 1 | 384 |
| 4 | INDEX RANGE SCAN | T_IDX | 2500 | | 20 | 00:00:01 | 1 | 384 |
-------------------------------------------------------+-----------------------------------+---------------+
Predicate Information:
----------------------
4 - access("ID">100000)
Again it can be seen from these lines:
** Increasing dynamic sampling selectivity
for predicate 0 from 0.002039 to 0.970916.
** Increasing dynamic sampling selectivity
for predicate 1 from 0.002039 to 0.970916.
index T_IDX selectivity est.: 0.00203883
that in principle the (corrected) selectivity estimate from the table level operation is supposed to be used, but in the end the wrong index selectivity gets copied over, which is then echoed by the final execution plan.
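For reference, the figures reported in the trace are consistent with each other: the index selectivity estimate of 0.00203883 is simply the filtered sample cardinality (2500, capped by the ROWNUM predicate of the sampling query) divided by the actual sample size (1226196), and re-applying that selectivity to the original cardinality yields the rounded plan cardinality of 2500 again. This can be cross-checked with a trivial query (not part of the trace, just a sanity check of the arithmetic):

```sql
-- Cross-check of the trace figures (numbers taken from the trace above):
-- 2500    = filtered sample card. (capped by ROWNUM <= 2500)
-- 1226196 = actual sample size = orig. card.
select round(2500 / 1226196, 8)             as index_sel -- 0.00203883 as reported
     , round(1226196 * (2500 / 1226196))    as plan_card -- 2500 as in the plan
from dual;
```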
This issue is tracked as bug "6408301: Bad cardinality estimate from dynamic sampling for indexes on partitioned table" and patches are available. It is fixed in 11.2.0.2, whereas the "wrong number of table blocks" issue is only fixed in 11.2.0.3 - I don't have a bug number at hand for the latter, though.
Summary
If you plan to use Dynamic Sampling on partitioned objects with many partitions, and the number of partitions to sample cannot be significantly limited by partition pruning, then the results of Dynamic Sampling at lower levels might be questionable.
In addition there is a bug that leads to wrong cost estimates for a full segment scan operation, which is only fixed in the most recent releases.
It probably makes sense to use higher Dynamic Sampling levels in such cases - this not only yields more reasonable sampling results, but may also allow you to avoid the mentioned bug, provided the number of blocks sampled is greater than the number of partitions to sample.
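As a sketch of this advice (level 7 is chosen purely for illustration; the appropriate level depends on the segment size), the hinted query from the test case could be re-run with a higher level:

```sql
-- Sketch only: request a higher dynamic sampling level for the query.
-- Level 7 samples more blocks than the level 5 used throughout this post,
-- making it less likely that the ROWNUM limit of the sampling query
-- cuts the sample short when many partitions have to be sampled.
explain plan
for
select /*+ dynamic_sampling(t 7) */ * from t
where id > 100000
;
```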
Also be aware of the case where an index can be used by Dynamic Sampling: for partitioned objects a bug might lead to dramatic underestimates of the cardinality.
Testcase Script
The issues described here can easily be reproduced using the following simple test case:
drop table t;
purge table t;
create table t
partition by range (pkey)
subpartition by hash (hash_id) subpartitions 32
(
partition pkey_1 values less than (2)
, partition pkey_2 values less than (3)
, partition pkey_3 values less than (4)
, partition pkey_4 values less than (5)
, partition pkey_5 values less than (6)
, partition pkey_6 values less than (7)
, partition pkey_7 values less than (8)
, partition pkey_8 values less than (9)
, partition pkey_9 values less than (10)
, partition pkey_10 values less than (11)
, partition pkey_11 values less than (12)
, partition pkey_12 values less than (13)
)
storage (initial 64k)
as
select
rownum as id
, mod(rownum, 12) + 1 as pkey
--, 12 as pkey
--, 1 as hash_id
, rownum as hash_id
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000000
;
set echo on time on
alter session set optimizer_dynamic_sampling = 5;
alter session set tracefile_identifier = 'composite_part_dyn_samp';
alter session set events '10053 trace name context forever, level 1';
explain plan
for
select * from t
;
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
;
alter session set tracefile_identifier = 'composite_part_dyn_samp_where';
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
where id > 100000
;
alter session set tracefile_identifier = 'dummy';
create index t_idx on t (id) global;
alter session set tracefile_identifier = 'composite_part_dyn_samp_index';
explain plan
for
select /*+ dynamic_sampling(t 5) */ * from t
where id > 100000
;
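Note that the script only runs EXPLAIN PLAN without showing the result; to actually look at the generated plans, the usual DBMS_XPLAN call can be added after each EXPLAIN PLAN (not part of the original script):

```sql
-- Display the plan of the most recent EXPLAIN PLAN in this session
select * from table(dbms_xplan.display);
```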