Sunday, October 30, 2011

Auto DOP And Direct-Path Inserts

This is just a short note about one of the potential side-effects of the new Auto Degree Of Parallelism (DOP) feature introduced in 11.2.

If you happen to have Parallel DML enabled in your session along with Auto DOP (and here I refer to the PARALLEL_DEGREE_POLICY = AUTO setting, not LIMITED), then it might take you by surprise that INSERT statements that are neither decorated with a parallel hint nor use any parallel-enabled objects can be turned into direct-path inserts.
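If you are unsure whether parallel DML is actually enabled in your current session, a quick sanity check is to query the corresponding status columns of V$SESSION - a minimal sketch (column names as of 11.2):


-- Check whether parallel DML / DDL / query is enabled in the current session
select
pdml_status
, pddl_status
, pq_status
from
v$session
where
sid = sys_context('userenv', 'sid');
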

Now don't get me wrong - I think this is reasonable and in line with the behaviour so far, because you have enabled parallel DML and Auto DOP is therefore eligible to make use of that feature. According to the documentation the default mode of parallel inserts is direct-path, so Auto DOP simply follows the documented behaviour when deciding to use parallel DML. Note that depending on the data volume to insert you can even end up with a serial direct-path insert combined with a parallel query part.

It is just that you need to be aware of the fact that a simple INSERT INTO ... SELECT FROM on serial objects might turn into a direct-path insert.

The main caveat to watch out for is that the direct-path insert won't re-use any space available in the existing blocks of the segment but will always allocate blocks above the current High Water Mark (HWM).

So if you use this feature along with some application logic that deletes rows from a segment, then by enabling Auto DOP you might end up with unreasonable segment growth that can have all kinds of nasty side effects.
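A simple way to observe this is to check the segment size before and after such an insert plus commit - a minimal sketch against USER_SEGMENTS, using the table T from the test case further below; with deletes in between you would see the block count grow although the freed space should have been sufficient:


-- Segment size of T - compare before and after the insert + commit
select
blocks
, bytes / 1024 / 1024 as size_mb
from
user_segments
where
segment_name = 'T';
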

Another side effect of this is more obvious: Existing application logic might break because it attempts to re-access the object within the same transaction after the now direct-path insert, which will fail with an "ORA-12838: cannot read/modify an object after modifying it in parallel".

If you still want to make use of parallel DML but need to be able to re-use available space in existing blocks, you can explicitly specify the NOAPPEND hint. This still allows the parallel AUTO degree to be used but prevents the direct-path insert mode for both serial and parallel inserts - 11g introduced the parallel conventional insert, by the way.

Here is a small test case to demonstrate the behaviour:


set echo on linesize 200 pagesize 0 trimspool on tab off

drop table t;

purge table t;

create table t
as
select
rownum as id
, rpad('x', 100) as vc1
from
dual
connect by level <= 1000000
;

exec dbms_stats.gather_table_stats(null, 'T')

alter session enable parallel dml;

alter session set parallel_degree_policy = manual;

insert into t select * from t where rownum <= 1000;

select * from table(dbms_xplan.display_cursor(null, null));

select count(*) from t;

commit;

alter session set parallel_degree_policy = auto;

insert into t select * from t where rownum <= 1000;

select * from table(dbms_xplan.display_cursor(null, null));

select count(*) from t;

commit;

insert /*+ noappend */ into t select * from t where rownum <= 1000;

select * from table(dbms_xplan.display_cursor(null, null));

select count(*) from t;

commit;


And here is the output I get from 11.2.0.1:


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> drop table t;

Table dropped.

SQL>
SQL> purge table t;

Table purged.

SQL>
SQL> create table t
2 as
3 select
4 rownum as id
5 , rpad('x', 100) as vc1
6 from
7 dual
8 connect by level <= 1000000
9 ;

Table created.

SQL>
SQL> exec dbms_stats.gather_table_stats(null, 'T')

PL/SQL procedure successfully completed.

SQL>
SQL> alter session enable parallel dml;

Session altered.

SQL>
SQL> alter session set parallel_degree_policy = manual;

Session altered.

SQL>
SQL> insert into t select * from t where rownum <= 1000;

1000 rows created.

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 95x60r5k6mhka, child number 0
-------------------------------------
insert into t select * from t where rownum <= 1000

Plan hash value: 508354683

---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 4200 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | TABLE ACCESS FULL | T | 1000K| 101M| 4200 (1)| 00:00:51 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM<=1000)


20 rows selected.

SQL>
SQL> select count(*) from t;
1001000

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> alter session set parallel_degree_policy = auto;

Session altered.

SQL>
SQL> insert into t select * from t where rownum <= 1000;

1000 rows created.

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID 95x60r5k6mhka, child number 2
-------------------------------------
insert into t select * from t where rownum <= 1000

Plan hash value: 482288532

-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 2326 (100)| | | | |
| 1 | LOAD AS SELECT | | | | | | | | |
|* 2 | COUNT STOPKEY | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10000 | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | P->S | QC (RAND) |
|* 5 | COUNT STOPKEY | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | PCWC | |
|* 7 | TABLE ACCESS FULL| T | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM<=1000)
5 - filter(ROWNUM<=1000)
7 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: Computed Degree of Parallelism is 2


30 rows selected.

SQL>
SQL> select count(*) from t;
select count(*) from t
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel


SQL>
SQL> commit;

Commit complete.

SQL>
SQL> insert /*+ noappend */ into t select * from t where rownum <= 1000;

1000 rows created.

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null));
SQL_ID dv79tpggm6q4k, child number 1
-------------------------------------
insert /*+ noappend */ into t select * from t where rownum <= 1000

Plan hash value: 2717876046

------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 2326 (100)| | | | |
| 1 | LOAD TABLE CONVENTIONAL | | | | | | | | |
|* 2 | COUNT STOPKEY | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10000 | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | P->S | QC (RAND) |
|* 5 | COUNT STOPKEY | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | PCWC | |
|* 7 | TABLE ACCESS FULL | T | 1000K| 101M| 2326 (1)| 00:00:28 | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM<=1000)
5 - filter(ROWNUM<=1000)
7 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: Computed Degree of Parallelism is 2


30 rows selected.

SQL>
SQL> select count(*) from t;
1003000

SQL>
SQL> commit;

Commit complete.

SQL>


Notice how the insert turns into a direct-path insert with Auto DOP and how the subsequent query fails.

As already mentioned, the automatic conversion to direct-path insert with Auto DOP can only be seen when Parallel DML is enabled in the session.

Monday, October 17, 2011

Volatile Data, Dynamic Sampling And Shared Cursors

For the next couple of weeks I'll be picking up various random notes I've made during the sessions that I attended at OOW. This particular topic was also a problem discussed recently at one of my clients, so it's certainly worth publishing here.

In one of the optimizer related sessions it was mentioned that for highly volatile data - for example often found in Global Temporary Tables (GTT) - it's recommended to use Dynamic Sampling rather than attempting to gather statistics. For GTTs in particular, gathering statistics is problematic because the statistics are used globally and shared across all sessions. But GTTs could have a completely different data volume and distribution per session, so sharing the statistics doesn't make sense in such scenarios.

So using Dynamic Sampling sounds like reasonable advice, and it probably is in many such cases.
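If you follow that advice for a GTT, one way to make sure that Dynamic Sampling actually gets used is to delete any existing statistics and lock them so that they don't get gathered again by the automatic statistics job - a simple sketch (the table name GTT_DYN is taken from the test case below):


-- Remove existing statistics from the GTT and lock them
-- so that the optimizer falls back to Dynamic Sampling
exec dbms_stats.delete_table_stats(null, 'GTT_DYN')

exec dbms_stats.lock_table_stats(null, 'GTT_DYN')
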

However, there is still a potential problem even when resorting to Dynamic Sampling. If the cursors based on Dynamic Sampling get shared between sessions, then they won't be re-optimized even if the GTT content in one session is completely different from that of the session that created the shared cursor previously.

So you can still end up with shared cursors and execution plans that are inappropriate to share across the different sessions. Using Dynamic Sampling doesn't address this issue. It only helps if the cursors do not get shared in the first place, for example if they use literals and these literals differ, so that different cursors will be generated based on the text matching.

Here is a simple test case that demonstrates the problem:


drop view v_gtt_dyn;

drop table gtt_dyn;

-- Create a Global Temporary Table with an index on it
create global temporary table gtt_dyn (
id number not null
, vc1 varchar2(100)
, filler varchar2(255)
)
on commit preserve rows
;

create index gtt_dyn_idx on gtt_dyn (id);

-- Create a simple view - it will become obvious later
-- why this has been used
create or replace view v_gtt_dyn as select * from gtt_dyn;

-- Run in Session 1
set echo on timing on

-- Unique value in ID column
insert into gtt_dyn
select
rownum as id
, rpad('x', 100) as vc1
, rpad('y', 255) as filler
from
dual
connect by level <= 10000
;

commit;

alter session set statistics_level = all;

alter session set optimizer_dynamic_sampling = 2;

select * from v_gtt_dyn where id = 10 and rownum > 1;

set linesize 200 pagesize 0 trimspool on tab off

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- Run in Session 2
set echo on timing on

-- Single value in ID column
insert into gtt_dyn
select
10 as id
, rpad('x', 100) as vc1
, rpad('y', 255) as filler
from
dual
connect by level <= 10000
;

commit;

alter session set statistics_level = all;

alter session set optimizer_dynamic_sampling = 2;

select * from v_gtt_dyn where id = 10 and rownum > 1;

set linesize 200 pagesize 0 trimspool on tab off

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));


Now this is probably an extreme case of data distribution differences, but if you run it you'll see it makes the point obvious: In the second session the data distribution of the GTT is completely different, and although there are no statistics gathered on the GTT and hence Dynamic Sampling gets used to arrive at an execution plan, the plan gets shared in the second session (there is only a child number 0). But the plan is completely inappropriate for the data distribution of the GTT in that session - you just need to look at the E-Rows and A-Rows columns of the runtime profile:


SQL> select * from v_gtt_dyn where id = 10 and rownum > 1;

no rows selected

Elapsed: 00:00:00.07
SQL>
SQL> set linesize 200 pagesize 0 trimspool on tab off
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
SQL_ID bjax3mksw1uza, child number 0
-------------------------------------
select * from v_gtt_dyn where id = 10 and rownum > 1

Plan hash value: 471827990

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.08 | 1117 |
| 1 | COUNT | | 1 | | 0 |00:00:00.08 | 1117 |
|* 2 | FILTER | | 1 | | 0 |00:00:00.08 | 1117 |
| 3 | TABLE ACCESS BY INDEX ROWID| GTT_DYN | 1 | 1 | 10000 |00:00:00.06 | 1117 |
|* 4 | INDEX RANGE SCAN | GTT_DYN_IDX | 1 | 1 | 10000 |00:00:00.02 | 63 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
4 - access("ID"=10)

Note
-----
- dynamic sampling used for this statement (level=2)



Imagine a more complex plan with joins and a larger data volume and this is a recipe for disaster.

If this problem cannot be addressed from the application side by helping the database to generate different cursors for the different data distributions (for example by simply adding different predicates that don't change the result, like 1 = 1, 2 = 2 etc.), then you might be able to handle the issue by using Virtual Private Database (VPD, aka Row Level Security / RLS, Fine Grained Access Control / FGAC). I've already demonstrated the general approach in the past here, but in this case a slightly more sophisticated approach could make sense.
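Just to illustrate the application-side workaround mentioned above: the idea is simply that each class of data distribution issues its own textual variant of the statement, so that text matching leads to separate parent cursors - a sketch (the predicates are of course arbitrary and the application needs to know which variant to use):


-- Issued by sessions holding "many distinct values" data
select * from v_gtt_dyn where id = 10 and rownum > 1 and 1 = 1;

-- Issued by sessions holding "single value" data
select * from v_gtt_dyn where id = 10 and rownum > 1 and 2 = 2;
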

By adding the following code and RLS policy I can drive Oracle to perform a re-optimization only in those cases where it is appropriate. This limits the damage to the Shared Pool that the general approach can do by unconditionally generating potentially numerous child cursors.


create or replace package pkg_rls_force_hard_parse is
function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse;
/

create or replace package body pkg_rls_force_hard_parse is
-- Cache the count in session state
g_cnt number;

function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
is
s_predicate varchar2(100);
begin
-- Only execute query once in session
-- Change if re-evaluation is desired
if g_cnt is null then
select
count(*)
into
g_cnt
from
gtt_dyn
where
id = 10
and rownum <= 10;
end if;

-- We end up with exactly two child cursors
-- with the desired different plans
-- These child cursors will be shared accordingly
if g_cnt > 1 then
s_predicate := '1 = 1';
else
s_predicate := '2 = 2';
end if;

return s_predicate;
end force_hard_parse;
end pkg_rls_force_hard_parse;
/

-- CONTEXT_SENSITIVE avoids re-evaluation of policy function at execution time
-- Note however that it doesn't avoid re-evaluation at parse time
exec DBMS_RLS.ADD_POLICY (USER, 'v_gtt_dyn', 'hard_parse_policy', USER, 'pkg_rls_force_hard_parse.force_hard_parse', 'select', policy_type => DBMS_RLS.CONTEXT_SENSITIVE);


Now if you repeat the above exercise - ideally with SQL trace enabled to see the additional activity caused by the RLS policy - you'll notice that the different sessions end up with different child cursors and execution plans based on the check made.
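Enabling SQL trace for that purpose can be as simple as the following (alternatively DBMS_MONITOR or the 10046 event can be used):


-- Enable SQL trace in the current session to see the recursive
-- query issued by the RLS policy function at parse time
alter session set sql_trace = true;

-- alternatively, including wait events:
-- exec dbms_monitor.session_trace_enable(waits => true, binds => false)
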

Now the reason why the view is in place should become obvious: an RLS policy on the base table would have led to an infinite recursive execution of the RLS policy function due to the query performed within the function. There are other options for dealing with that, for example storing the RLS policy function in a separate schema with the EXEMPT ACCESS POLICY privilege should also work.
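For completeness, a sketch of that alternative - the schema name RLS_OWNER is purely illustrative and the grant needs to be issued by a suitably privileged user:


-- Queries issued by RLS_OWNER bypass all RLS policies, so a policy
-- function owned by that schema can query the base table directly
grant exempt access policy to rls_owner;
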

This is the result in the second session now:


SQL> select * from v_gtt_dyn where id = 10 and rownum > 1;

no rows selected

Elapsed: 00:00:00.12
SQL>
SQL> set linesize 200 pagesize 0 trimspool on tab off
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
SQL_ID bjax3mksw1uza, child number 1
-------------------------------------
select * from v_gtt_dyn where id = 10 and rownum > 1

Plan hash value: 424976618

-----------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.04 | 1003 |
| 1 | COUNT | | 1 | | 0 |00:00:00.04 | 1003 |
|* 2 | FILTER | | 1 | | 0 |00:00:00.04 | 1003 |
|* 3 | TABLE ACCESS FULL| GTT_DYN | 1 | 9288 | 10000 |00:00:00.03 | 1003 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
3 - filter("ID"=10)

Note
-----
- dynamic sampling used for this statement (level=2)


Notice how a second child cursor has been generated and that the cardinality estimate is now much closer to reality.
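You can cross-check the child cursors in the Shared Pool - a minimal sketch using the SQL_ID from the example output above:


-- Expect two child cursors, one per data distribution / predicate variant
select
sql_id
, child_number
, plan_hash_value
, executions
from
v$sql
where
sql_id = 'bjax3mksw1uza';
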

Adaptive Cursor Sharing / Cardinality Feedback

I was curious to see if recent features like Adaptive Cursor Sharing or Cardinality Feedback would be able to solve the issue when using the 11g releases.

Cardinality Feedback (introduced in 11.2) unfortunately doesn't get used in the scenario described here, because Dynamic Sampling disables Cardinality Feedback in the current implementation.

Note that the usage of bind variables also disables Cardinality Feedback for those parts of the plan affected by the bind variables - as described in the Optimizer blog post that can be found here.

So maybe Adaptive Cursor Sharing (ACS, introduced in 11.1) can come to the rescue in case bind variables get used.

Of course the usage of bind variables increases the probability of cursor sharing in the above scenario. As already outlined in a previous note, ACS is a "reactive" and "non-persistent" feature, so it will only be able to correct things that have already gone wrong at least once. Furthermore, if the ACS information gets aged out of the Shared Pool, the mistakes will have to be repeated before ACS recognizes them again.

However it is interesting to note that I wasn't able to get ACS working in a slightly modified scenario like this (without the RLS policy in place of course):


-- Session 1
set echo on timing on

set linesize 200 pagesize 0 trimspool on tab off

insert into gtt_dyn
select
rownum as id
, rpad('x', 100) as vc1
, rpad('y', 255) as filler
from
dual
connect by level <= 10000
;

commit;

alter session set statistics_level = all;

variable n1 number

exec :n1 := 10

select * from v_gtt_dyn where id <= :n1 and rownum > 1;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- Session 2
set echo on timing on

set linesize 200 pagesize 0 trimspool on tab off

insert into gtt_dyn
select
10 as id
, rpad('x', 100) as vc1
, rpad('y', 255) as filler
from
dual
connect by level <= 10000
;

commit;

alter session set statistics_level = all;

variable n1 number

exec :n1 := 10

select * from v_gtt_dyn where id <= :n1 and rownum > 1;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- Second execution to allow ACS potentially kicking in
select * from v_gtt_dyn where id <= :n1 and rownum > 1;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));


There are some interesting points to notice when running this example:

1. A cursor that uses non-equality operators like the less-than-or-equal predicate above together with bind variables usually gets marked as "bind-sensitive" and will be monitored by ACS. But in the above case the cursor was not marked as bind sensitive and hence ACS didn't even bother to monitor it

2. Consequently the two sessions share the single child cursor and the problem is not addressed by ACS even in subsequent executions

3. It looks like the usage of Dynamic Sampling again disables ACS

Looking at the way ACS manages the cursor sharing criteria (check V$SQL_CS_SELECTIVITY for example), I can see the problem: ACS probably wouldn't support the fact that the same value for the bind variable results in a completely different selectivity range.
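For reference, this is roughly how the ACS-related information can be checked - a sketch using the relevant V$SQL columns and V$SQL_CS_SELECTIVITY (replace the SQL_ID accordingly):


-- Is the cursor monitored by ACS at all?
select
child_number
, is_bind_sensitive
, is_bind_aware
, is_shareable
from
v$sql
where
sql_id = '&sql_id';

-- Selectivity ranges recorded by ACS per child cursor
select * from v$sql_cs_selectivity where sql_id = '&sql_id';
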

Maybe this explains why ACS is not activated for cursors that use Dynamic Sampling - ACS may only be able to cope with different bind value ranges that lead to different selectivities.

So even when using bind variables on 11g with ACS, it looks like only the RLS policy approach allows addressing this issue from the database side alone. Ideally the application should be "data-aware" in such cases and help the database arrive at reasonable execution plans by actively unsharing the cursors.

Tuesday, October 11, 2011

Parallel Downgrade

There are many reasons why a parallel execution might not run with the expected degree of parallelism (DOP), beginning with running out of parallel slaves (PARALLEL_MAX_SERVERS or PROCESSES reached), PARALLEL_ADAPTIVE_MULTI_USER, downgrades at execution time via the Resource Manager, or the more recent features like PARALLEL_DEGREE_LIMIT or the Auto DOP introduced in Oracle 11.2.
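A quick way to confirm that downgrades are happening at all is to look at the instance-wide statistics - a simple sketch (the exact statistic names may differ slightly between versions, hence the LIKE):


-- Instance-wide overview of parallel downgrades since startup
select
name
, value
from
v$sysstat
where
name like 'Parallel operations%downgraded%';
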

However, what do you do if you've already checked all these possibilities but still see a downgrade occurring? You can always enable the parallel execution tracing facility via the "_px_trace" parameter in the session (see for example the MOS document ID 444164.1 "Tracing Parallel Execution with _px_trace. Part I" for details on how to use it). If you see there that parallel slaves are getting acquired but released again immediately, possibly followed by an error message, then you might want to have a look at the ancient profile setting SESSIONS_PER_USER. This setting is probably mostly known and used to limit the number of concurrent sessions that a particular user can open, but it is often forgotten or simply unknown that this profile setting is also respected by parallel execution: each parallel slave started counts towards the limit. Actually, up to Oracle 9.2.0.7 you could end up with an ORA-12805 (parallel query server died unexpectedly) error in such a case rather than seeing a downgrade, as described in bug 4041253.
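If you want to reproduce the effect, here is a minimal sketch of such a profile - the profile and user names are purely illustrative, and note that resource limits in profiles are only enforced when RESOURCE_LIMIT is set to TRUE:


-- Illustrative profile limiting the number of concurrent sessions per user
create profile limit_sessions limit sessions_per_user 4;

alter user app_user profile limit_sessions;

-- Resource limits defined in profiles are only enforced when this is TRUE
alter system set resource_limit = true;
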

So the next time you see an otherwise unexplainable downgrade, or think about using the SESSIONS_PER_USER profile limit for a user that is supposed to make use of Parallel Execution, consider these implications.

Sample px_trace snippet from 10.2.0.5 when downgrading to serial due to SESSIONS_PER_USER Profile setting:

kxfrSysInfo
DOP trace -- compute default DOP from system info
# instance alive = 1 (kxfrsnins)
kxfrDefaultDOP
DOP Trace -- compute default DOP
# CPU = 4
Threads/CPU = 2 ("parallel_threads_per_cpu")
default DOP = 8 (# CPU * Threads/CPU)
default DOP = 8 (DOP * # instance)
kxfrSysInfo
system default DOP = 8 (from kxfrDefaultDOP())
kxfralo
DOP trace -- requested thread from best ref obj = 8 (from kxfrIsBestRef(
))
kxfralo
threads requested = 8 (from kxfrComputeThread())
kxfralo
adjusted no. threads = 8 (from kxfrAdjustDOP())
kxfralo
about to allocate 8 slaves
kxfrAllocSlaves
DOP trace -- call kxfpgsg to get 8 slaves
kxfpgsg
num server requested = 8
num server requested = 8 KXFPLDBL/KXFPADPT/ load balancing:on adaptive:o
n
kxfpiinfo
inst[cpus:mxslv]
1[4:80]
kxfpclinfo
inst(load:user:pct:fact)aff
1(1:0:100:400)
kxfpAdaptDOP
Requested=8 Granted=8 Target=32 Load=1 Default=8 users=0 sets=1
load adapt num servers requested to = 8 (from kxfpAdaptDOP())
kxfpgsg
getting 1 sets of 8 threads, client parallel query execution flg=0x30
Height=8, Affinity List Size=0, inst_total=1, coord=1
Insts 1
Threads 8
kxfpg1sg
q:000007FF4656C058 req_threads:8 nthreads:8 #inst:1 normal
kxfpg1srv
trying to get slave P000 on instance 1 for q:000007FF4656C058
slave P000 is local
found slave P000 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 0 flg:30
free descriptor found dp:000007FF49680318
Allocated slave P000 dp:000007FF49680318 pnum:0 flg:4
Got It. 1 so far.
kxfpg1srv
trying to get slave P001 on instance 1 for q:000007FF4656C058
slave P001 is local
found slave P001 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 1 flg:30
free descriptor found dp:000007FF49680398
Allocated slave P001 dp:000007FF49680398 pnum:1 flg:4
Got It. 2 so far.
kxfpg1srv
trying to get slave P002 on instance 1 for q:000007FF4656C058
slave P002 is local
found slave P002 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 2 flg:30
free descriptor found dp:000007FF49680418
Allocated slave P002 dp:000007FF49680418 pnum:2 flg:4
Got It. 3 so far.
kxfpg1srv
trying to get slave P003 on instance 1 for q:000007FF4656C058
slave P003 is local
found slave P003 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 3 flg:30
free descriptor found dp:000007FF49680498
Allocated slave P003 dp:000007FF49680498 pnum:3 flg:4
Got It. 4 so far.
kxfpg1srv
trying to get slave P004 on instance 1 for q:000007FF4656C058
slave P004 is local
found slave P004 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 4 flg:30
free descriptor found dp:000007FF49680518
Allocated slave P004 dp:000007FF49680518 pnum:4 flg:4
Got It. 5 so far.
kxfpg1srv
trying to get slave P005 on instance 1 for q:000007FF4656C058
slave P005 is local
found slave P005 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 5 flg:30
free descriptor found dp:000007FF49680598
Allocated slave P005 dp:000007FF49680598 pnum:5 flg:4
Got It. 6 so far.
kxfpg1srv
trying to get slave P006 on instance 1 for q:000007FF4656C058
slave P006 is local
found slave P006 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 6 flg:30
free descriptor found dp:000007FF49680618
Allocated slave P006 dp:000007FF49680618 pnum:6 flg:4
Got It. 7 so far.
kxfpg1srv
trying to get slave P007 on instance 1 for q:000007FF4656C058
slave P007 is local
found slave P007 dp:000007FF49682A98 flg:0
kxfpcre1
Creating slave 7 flg:30
free descriptor found dp:000007FF49680698
Allocated slave P007 dp:000007FF49680698 pnum:7 flg:4
Got It. 8 so far.
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF465615A8 action=1 slave=
0 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46561D68 action=1 slave=
2 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46562148 action=1 slave=
3 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46562CE8 action=1 slave=
4 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF4655FE68 action=1 slave=
5 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46562ED8 action=1 slave=
6 inst=1
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46560058 action=1 slave=
7 inst=1
kxfpg1sg
got 1 servers (sync), returning...
kxfpgsg
serial - too few slaves alloc'd
kxfpqsrls
Release Slave q=0x000007FF4656C058 qr=0x000007FF46561988 action=1 slave=
1 inst=1
kxfplsig
signaling OER(10387) in serial 4609