Monday, April 27, 2015

Enabling Edition Based Redefinition On A Schema With Object-Relational Tables

This is just a heads-up for those thinking about using Edition Based Redefinition (EBR) and enabling it on an existing schema with objects. Although EBR isn't exactly a new feature, its current adoption level is probably not that high (which will probably change in the future, since, as far as I understand, Oracle E-Business Suite now uses EBR as part of its default upgrade procedure).

I was recently contacted by someone who enabled EBR on an existing schema using ALTER USER ... ENABLE EDITIONS and had to use the "FORCE" option since there were (according to the official ALTER USER documentation) "objects that are not editionable and that depend on editionable type objects in the schema. ... In this case, all the objects that are not editionable and that depend on the editionable type objects in the schema being editions-enabled become invalid".

Although one could say it is clearly mentioned in the documentation, the consequences are probably not that obvious to everyone if those non-editionable objects are tables having columns based on user-defined types. So I'll state it here as clearly as I can:

If you use the FORCE option of ALTER USER ... ENABLE EDITIONS to enable editions on an existing schema that already contains objects, and among those objects are tables having columns based on user-defined types, then those tables will effectively become invalid and stay invalid. There is no officially documented way to reverse this step or to compile the tables to become valid again (there's no ALTER TABLE COMPILE or similar). The tables can no longer be accessed, and all the data they contain is hence effectively gone, too.

This means the affected tables need to be restored from a (hopefully existing and usable) backup (I don't know if there's a dictionary hack available that is officially sanctioned by Oracle support to make the table valid again).

In my opinion the FORCE option should check whether the schema contains such dependent tables and, in that case, error out with a message stating that the table needs to be dropped first (or moved to a different schema) before ENABLE EDITIONS can succeed. This would make the situation much clearer, rather than leaving behind tables in INVALID state that cannot be fixed or recompiled afterwards.

Below is a simple test case that demonstrates the issue:
-- Create a test user for enabling editions
drop user ebr_test cascade;

create user ebr_test identified by ebr_test;

grant dba to ebr_test;

-- Connect as test user
connect ebr_test/ebr_test

-- and create an object-relational type
create type test_coll_type as table of number;

-- and a table having such a column of that type
create table test_ebr_table (col1 number, col2 number, test_coll test_coll_type) nested table test_coll store as test_coll_table;

-- Some test data
insert into test_ebr_table (col1, col2, test_coll) values (1, 1, test_coll_type(1, 2, 3));

commit;

-- Everything is fine so far
select * from test_ebr_table, table(test_coll);

-- Enable editions, FORCE is required
alter user ebr_test enable editions force;

-- This no longer works (ORA-04063: table "EBR_TEST.TEST_EBR_TABLE" has errors)
select * from test_ebr_table, table(test_coll);

-- Not even simple scalar values can be accessed from the table
select col1 from test_ebr_table;

-- The table has gone INVALID
select status from user_objects where object_name = 'TEST_EBR_TABLE';
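
A quick way to check up front whether a schema contains such tables is a query along the following lines (a minimal sketch, based on the fact that columns of user-defined types expose a non-NULL DATA_TYPE_OWNER in USER_TAB_COLUMNS):
-- Tables in the current schema having columns based on user-defined types owned by that schema
select table_name, column_name, data_type, data_type_owner
from   user_tab_columns
where  data_type_owner = user;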

Monday, April 20, 2015

Function-Based Indexes And CURSOR_SHARING = FORCE

In general it is known that Function-Based Indexes (FBIs) can no longer be used by the optimizer if the expression contains literals and CURSOR_SHARING = FORCE / SIMILAR (deprecated) turns those literals into bind variables. Jonathan Lewis described the issue quite a while ago here in detail.

In a recent OTN thread this issue was raised again, but to my surprise, when I played around with a test case that mimicked the OP's problem query, I found that (certain) Oracle versions have some built-in logic that enables FBI usage in certain cases where you would expect the index not to be usable.

If you test the following code on versions from 10.2.0.4 (possibly earlier) up to and including version 11.2.0.3 then you'll notice some interesting details:

create table t
as
select * from all_objects;

create index t_idx on t (owner || ' ' || object_name);

exec dbms_stats.gather_table_stats(null, 't')

set echo on linesize 200 pagesize 0

alter session set cursor_sharing = force;

select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);
Here is the relevant output I got from 11.2.0.1 for example:
SQL> alter session set cursor_sharing = force;

Session altered.

SQL>
SQL> select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID  ar3tw7r1rvawk, child number 0
-------------------------------------
select /*+ full(t) */ * from t where owner || :"SYS_B_0" || object_name
= :"SYS_B_1"

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   284 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    |     1 |   117 |   284   (2)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"||' '||"OBJECT_NAME"=:SYS_B_1)


19 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID  6kzz3vw5x8x3b, child number 0
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   117 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T"."SYS_NC00016$"=:SYS_B_1)


20 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID  6kzz3vw5x8x3b, child number 1
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 3778778741

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       | 52472 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T     |   724 | 84708 | 52472   (1)| 00:10:30 |
|   2 |   INDEX FULL SCAN           | T_IDX | 72351 |       |   420   (1)| 00:00:06 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"||:SYS_B_0||"OBJECT_NAME"=:SYS_B_1)


20 rows selected.
Looking at the statement text that results from "CURSOR_SHARING = force" we can spot the expected bind variables instead of the literals, and this should result in a corresponding predicate that doesn't match the FBI expression. However, when looking at the filter expression in the predicate section (when forcing a full table scan) we can spot something interesting: It still shows the literal, which doesn't correspond to the predicate of the rewritten query text.

The next execution shows that the FBI really can be used despite the bind variable replacement taking place, and the final execution shows that cursor sharing works correctly in the sense that a new child cursor got created for the same SQL text, with a different plan and a different predicate section, when using a different literal in the original SQL text. V$SQL_SHARED_CURSOR shows "HASH_MATCH_FAILED", which is described as "No existing child cursors have the unsafe literal bind hash values required by the current cursor", which makes sense and probably means that the corresponding bind variable is marked as "unsafe" internally.
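
To check this yourself, a query like the following could be used (a sketch; the SQL_ID is the one from the example output above, and the HASH_MATCH_FAILED column requires a sufficiently recent version):
-- Why was a second child cursor created?
select child_number, hash_match_failed
from   v$sql_shared_cursor
where  sql_id = '6kzz3vw5x8x3b';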

This optimisation only shows up if there is a suitable FBI - if there's no corresponding expression, the SQL text and the predicate section match. Furthermore it only supports certain expressions - Jonathan's example shows that in general these rewrites do prevent FBI usage. And obviously it ceases to work in 11.2.0.4 and 12c. Whether this is a bug or a feature I don't know, but since it only seems to apply to certain expressions it's probably not that relevant anyway.

As Jonathan points out in his note, you can always work around the general problem by hiding the expression in a view, and since 11g a proper virtual column definition is of course the better approach, which doesn't expose this problem either.
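
For illustration, a virtual column based variant could look like this (a minimal sketch; the column and index names are made up here):
-- Replace the FBI with a virtual column plus a regular index (11g and later)
drop index t_idx;

alter table t add (owner_object_name generated always as (owner || ' ' || object_name) virtual);

create index t_idx_vc on t (owner_object_name);

-- With CURSOR_SHARING = FORCE the literal is still replaced by a bind variable,
-- but the predicate references the virtual column, so the index stays usable
select * from t where owner_object_name = 'BLA';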

Even better would be the proper usage of bind variables and not using forced cursor sharing, but there are still many installations out there that rely on that feature.
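
For completeness, a bind variable variant of the example query could look like this in SQL*Plus (a sketch; only the comparison value becomes a bind variable, while the constant inside the indexed expression stays a literal, so the FBI remains usable without forced cursor sharing):
variable search_value varchar2(200)

exec :search_value := 'BLA'

select /*+ index(t) */ * from t where owner || ' ' || object_name = :search_value;

select * from table(dbms_xplan.display_cursor);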

Monday, April 13, 2015

Combined ACCESS And FILTER Predicates - Excessive Throw-Away

Catchy title... Let's assume the following data setup:
create table t1
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

create table t2
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

create table t3
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

exec dbms_stats.gather_table_stats(null, 't1')

exec dbms_stats.gather_table_stats(null, 't2')

exec dbms_stats.gather_table_stats(null, 't3')

-- Deliberately wrong order (FBI after gather stats) - the virtual columns created for this FBI don't have statistics, see below
create index t2_idx on t2 (case when id2 = 1 then id2 else 1 end, case when id2 = 2 then id2 else 1 end, filler, id);
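
-- Optional check (a sketch): the hidden virtual columns created for this FBI
-- should show up here without statistics
select column_name, hidden_column, num_distinct, last_analyzed
from   user_tab_cols
where  table_name = 'T2'
and    hidden_column = 'YES';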

create index t3_idx on t3 (id, filler, id2);
And the following execution plan (all results are from 12.1.0.2 but should be applicable to other versions, too):
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |  1416K|   132   (0)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 10000 |  1416K|   132   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL | T2   | 10000 |   292K|    44   (0)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |
----------------------------------------------------------------------------
How long would you expect it to run to return all rows (no tricks like expensive regular expressions or user-defined PL/SQL functions)?

It should probably take just a blink, given the tiny tables with just 10000 rows each.

However, these are the runtime statistics for a corresponding execution:
|            |            |            |
|            |DATABASE    |CPU         |
|DURATION    |TIME        |TIME        |
|------------|------------|------------|
|+0 00:00:23 |+0 00:00:23 |+0 00:00:23 |
|            |            |            |

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Pid | Ord | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows| PGA   | Start | Dur(T)| Dur(A)| Time Active Graph    | Activity Graph ASH          | Top 5 Activity ASH  |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 |     |   6 | SELECT STATEMENT    |      |       |       |   132 (100)|          |    1  |    1  |    0  |       |       |       |                      |                             |                     |
|*  1 |   0 |   5 |  HASH JOIN          |      | 10000 |  1191K|   132   (0)| 00:00:01 |    1  |    1  | 1401K |     2 |    23 |    22 | ##### ############## | @@@@@@@@@@@@@@@@@@@ (100%)  | ON CPU(22)          |
|   2 |   1 |   1 |   TABLE ACCESS FULL | T2   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|*  3 |   1 |   4 |   HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |    1  |   10K | 1930K |       |       |       |                      |                             |                     |
|   4 |   3 |   2 |    TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|   5 |   3 |   3 |    TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
How is it possible to burn more than 20 seconds of CPU time with that execution plan?

The actual rows produced correspond pretty much to the estimated cardinalities (except for the final hash join), so that doesn't look suspect at first glance.
What becomes obvious from the SQL Monitoring output is that all the time is spent on the hash join operation ID = 1.

Of course at that point (at the latest) you should tell me off for not having shown you the predicate section of the plan and the corresponding query in the first place.

So here is the predicate section and the corresponding query:
Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
              "T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END  AND CASE "T1"."ID2" WHEN
              2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
              ELSE 1 END )
       filter("T3"."ID2"=CASE  WHEN ("T1"."ID">"T2"."ID") THEN
              "T1"."ID" ELSE "T2"."ID" END )
   3 - access("T3"."ID"="T1"."ID")

   
select /*+
           leading(t1 t3 t2)
           full(t1)
           full(t3)
           use_hash(t3)
           swap_join_inputs(t3)
           full(t2)
           use_hash(t2)
           swap_join_inputs(t2)
       */
       t1.*
     --, t3.id2
     --, case when t1.id > t2.id then t1.id else t2.id end
from
       t1
     , t2
     , t3
where
       1 = 1
--
and    case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and    case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and    t3.id  = t1.id
and    t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;
There are two important aspects to this query and the plan: First, the join expression between T1 and T2 (without corresponding expression statistics) is sufficiently deceptive to hide from the optimizer that it in fact produces a cartesian product (mimicking real-life multi-table join expressions that lead to bad estimates). Second, the table T3 is joined to both T1 and an expression based on T1 and T2, which means that this expression can only be evaluated after the join between T1 and T2.
With the execution plan shape enforced via my hints (but it could just as well be a real-life plan shape preferred by the optimizer), T3 and T1 are joined first, producing an innocent 10K rows row source, which is then joined to T2. And here the accident happens inside the hash join operation:

If you look closely at the predicate section you'll notice that the hash join operation has both an ACCESS and a FILTER predicate. The ACCESS part performs the lookup into the hash table based on the join expressions between T1 and T2, which happen to describe a cartesian product, so it produces 10K times 10K rows. Only afterwards is the FILTER (representing the T3 to T1/T2 join expression) applied to these 100M rows, matching only a single row in my example here, which is what A-Rows shows for this operation.
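
To put a number on what the ACCESS part alone produces, a quick count of the rows matching only the T1/T2 join expressions could be used (a sketch; since ID2 is 1 for every row in both tables, both CASE expressions evaluate to 1 for every row and the count should come back as 10K * 10K = 100M):
select count(*)
from
       t1
     , t2
where
       case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and    case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
;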

So the point is that this excessive work and FILTER throwaway isn't very well represented in the row source statistics. Ideally you would need one of the following two modifications to get a better picture of what is going on:

Either the FILTER operator should be a separate step in the plan, which in theory would then look like this:
---------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows|
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   132 (100)|          |    1  |    1  |
|*  1a|  FILTER              |      | 10000 |  1191K|   132   (0)| 00:00:01 |    1  |    1  |
|*  1b|   HASH JOIN          |      | 10000 |  1191K|   132   (0)| 00:00:01 |    1  |  100M |
|   2 |    TABLE ACCESS FULL | T2   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |
|*  3 |    HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |    1  |   10K |
|   4 |     TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |
|   5 |     TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |    1  |   10K |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1a- filter("T3"."ID2"=CASE  WHEN ("T1"."ID">"T2"."ID") THEN
              "T1"."ID" ELSE "T2"."ID" END )
   1b- access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
              "T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END  AND CASE "T1"."ID2" WHEN
              2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
              ELSE 1 END )
   3 - access("T3"."ID"="T1"."ID")
This would make the excess rows produced by the ACCESS part of the hash join very obvious, but it is probably not a good solution for performance reasons, because the data would then have to flow from one operation to another rather than being processed within the HASH JOIN operation, which means increased overhead.

Or an additional rowsource statistic should be made available:
----------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows|AE-Rows|
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   132 (100)|          |    1  |    1  |    1  |
|*  1 |  HASH JOIN          |      | 10000 |  1191K|   132   (0)| 00:00:01 |    1  |    1  |  100M |
|   2 |   TABLE ACCESS FULL | T2   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |   10K |
|*  3 |   HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |    1  |   10K |   10K |
|   4 |    TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |   10K |
|   5 |    TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |    1  |   10K |   10K |
----------------------------------------------------------------------------------------------------
I've called this "Actually evaluated rows" (AE-Rows) here. In addition to this case of combined ACCESS and FILTER predicates it could also be helpful for other FILTER cases, for example even for a simple full table scan, to see how many rows were evaluated rather than only how many rows matched a possible filter (which is what A-Rows currently shows).
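
As a simple illustration of that last point, consider a plain full table scan with a selective filter (a sketch using the GATHER_PLAN_STATISTICS hint): A-Rows on the TABLE ACCESS FULL line will only show the single matching row, although all 10000 rows of T1 had to be evaluated against the filter.
select /*+ gather_plan_statistics full(t1) */ count(*) from t1 where id = 1;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));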

In a recent OTN thread this topic came up again, and since I also came across this phenomenon a couple of times recently I thought I'd put this note together. Note that Martin Preiss has submitted a corresponding database idea on the OTN forum.

Expanding on this idea a bit further, it could be useful to have an additional "Estimated evaluated rows (EE-Rows)" calculated by the optimizer and shown in the plan. This could also be used to improve the optimizer's cost model for such cases, because at present it looks like the optimizer doesn't consider additional FILTER predicates on top of ACCESS predicates when calculating the CPU cost of operations like HASH JOINs.

Note that this problem isn't specific to HASH JOIN operations - you can get similar effects with other join methods, like NESTED LOOP joins, or even with simple INDEX lookup operations, where again the ACCESS part isn't very selective and only the FILTER applied afterwards reduces the result to the matching rows.

Here are some examples with the given setup:
select /*+
           leading(t1 t3 t2)
           full(t1)
           full(t3)
           use_hash(t3)
           swap_join_inputs(t3)
           index(t2)
           use_nl(t2)
       */
       t1.*
     --, t3.id2
     --, case when t1.id > t2.id then t1.id else t2.id end
from
       t1
     , t2
     , t3
where
       1 = 1
--
and    case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and    case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and    t3.id  = t1.id
and    t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Pid | Ord | Operation           | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows| PGA   | Start | Dur(T)| Dur(A)| Time Active Graph    | Activity Graph ASH          | Top 5 Activity ASH  |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 |     |   6 | SELECT STATEMENT    |        |       |       | 10090 (100)|          |    1  |    1  |       |       |       |       |                      |                             |                     |
|   1 |   0 |   5 |  NESTED LOOPS       |        | 10000 |  1416K| 10090   (1)| 00:00:01 |    1  |    1  |       |       |       |       |                      |                             |                     |
|*  2 |   1 |   3 |   HASH JOIN         |        | 10000 |  1123K|    88   (0)| 00:00:01 |    1  |   10K | 1890K |       |       |       |                      |                             |                     |
|   3 |   2 |   1 |    TABLE ACCESS FULL| T3     | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|   4 |   2 |   2 |    TABLE ACCESS FULL| T1     | 10000 |  1054K|    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|*  5 |   1 |   4 |   INDEX RANGE SCAN  | T2_IDX |     1 |    30 |     1   (0)| 00:00:01 |   10K |    1  |       |     3 |    33 |    32 |  ################### | @@@@@@@@@@@@@@@@@@@ (100%)  | ON CPU(32)          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T3"."ID"="T1"."ID")
   5 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
              ="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
              END ="T2"."SYS_NC00005$")
       filter("T3"."ID2"=CASE  WHEN ("T1"."ID">"T2"."ID") THEN
              "T1"."ID" ELSE "T2"."ID" END )

select /*+
           leading(t1 t3 t2)
           full(t1)
           full(t3)
           use_hash(t3)
           swap_join_inputs(t3)
           index(t2)
           use_nl(t2)
       */
       max(t1.filler)
     --, t3.id2
     --, case when t1.id > t2.id then t1.id else t2.id end
from
       t1
     , t2
     , t3
where
       1 = 1
--
and    case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and    case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
--
and    t3.id  = t1.id
and    t2.filler >= t1.filler
and    t2.id = case when t1.id2 > t3.id2 then t1.id2 else t3.id2 end
;

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Pid | Ord | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows| PGA   | Start | Dur(T)| Dur(A)| Time Active Graph    | Activity Graph ASH          | Top 5 Activity ASH  |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 |     |   7 | SELECT STATEMENT     |        |       |       | 20092 (100)|          |    1  |    1  |       |       |       |       |                      |                             |                     |
|   1 |   0 |   6 |  SORT AGGREGATE      |        |     1 |   223 |            |          |    1  |    1  |       |       |       |       |                      |                             |                     |
|   2 |   1 |   5 |   NESTED LOOPS       |        |     1 |   223 | 20092   (1)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|*  3 |   2 |   3 |    HASH JOIN         |        | 10000 |  1123K|    88   (0)| 00:00:01 |    1  |   10K | 1900K |       |       |       |                      |                             |                     |
|   4 |   3 |   1 |     TABLE ACCESS FULL| T3     | 10000 | 70000 |    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|   5 |   3 |   2 |     TABLE ACCESS FULL| T1     | 10000 |  1054K|    44   (0)| 00:00:01 |    1  |   10K |       |       |       |       |                      |                             |                     |
|*  6 |   2 |   4 |    INDEX RANGE SCAN  | T2_IDX |     1 |   108 |     2   (0)| 00:00:01 |   10K |   10K |       |     2 |    34 |    34 | #################### | @@@@@@@@@@@@@@@@@@@ (100%)  | ON CPU(34)          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T3"."ID"="T1"."ID")
   6 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
              ="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
              END ="T2"."SYS_NC00005$" AND "T2"."FILLER">="T1"."FILLER" AND
              "T2"."ID"=CASE  WHEN ("T1"."ID2">"T3"."ID2") THEN "T1"."ID2" ELSE
              "T3"."ID2" END  AND "T2"."FILLER" IS NOT NULL)
       filter("T2"."ID"=CASE  WHEN ("T1"."ID2">"T3"."ID2") THEN
              "T1"."ID2" ELSE "T3"."ID2" END )
The former exhibits exactly the same problem as the HASH JOIN example, except that the FILTER is evaluated in the inner row source of a NESTED LOOP join after the index access operation.

The latter shows, as a variation, the classic partial "index access" due to a range comparison in between: although the entire expression can be evaluated at index level, the access part matches every index entry, so the range scan actually needs to walk the entire index at each loop iteration, and the FILTER is then applied to all the index entries evaluated.
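
One way to see how much work those index range scans actually do per loop iteration is to capture rowsource statistics and look at the A-Rows and Buffers columns, for example like this (a sketch based on the second example above; it requires the GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL = ALL):
select /*+
           gather_plan_statistics
           leading(t1 t3 t2)
           full(t1)
           full(t3)
           use_hash(t3)
           swap_join_inputs(t3)
           index(t2)
           use_nl(t2)
       */
       max(t1.filler)
from
       t1
     , t2
     , t3
where
       1 = 1
and    case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and    case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
and    t3.id  = t1.id
and    t2.filler >= t1.filler
and    t2.id = case when t1.id2 > t3.id2 then t1.id2 else t3.id2 end
;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));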

Friday, April 10, 2015

Video Tutorial: XPLAN_ASH Active Session History - Part 5

This is the next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.