Comments on "Oracle related stuff: Parallel DML - Conventional (non-direct-path) Inserts As Select"

Anonymous (Marek), 2016-03-07 14:59:

Thank you very much for the answer and explanation.

Marek

Randolf, 2016-03-05 12:40:

Hi Marek,

I think you will be out of luck with the multi-table insert. Obviously the "parallel conventional load" code path has only been added to the single-table insert, but not to the multi-table insert.

You haven't mentioned a version, but I've checked both 11.2.0.4 and 12.1.0.2, and both show the same behaviour - add the NOAPPEND hint to the multi-table insert and Parallel DML is gone.

Thanks for pointing this out.

Note that I've simplified your test case by creating a table T3 as source, in order to rule out that the parallel execution of the ALL_TABLES view has any influence on the result.

By the way, the multi-table insert comes with other oddities: its parallel plan when using Parallel DML still uses an unnecessary PX SEND ROUND-ROBIN distribution (something that was already addressed for the single-table insert in 11.2), and in previous versions the optimizer didn't provide an "Outline" hint set for plan stability (this seems to be fixed at least in 11.2.0.4 and 12.1.0.2).

Randolf
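A minimal sketch of the kind of simplified setup described above - the T3 name is taken from Randolf's remark, the remaining statements are assumed from Marek's original test case further down:

create table t3 as
select owner, table_name
from   all_tables;

alter session force parallel dml parallel 8;

-- with the T3 source the behaviour stays the same:
-- adding NOAPPEND to the multi-table insert disables Parallel DML
insert /*+ parallel(t2 8) noappend */ all
  when owner = 'SYS'    then into t1 (owner, table_name) values (owner, table_name)
  when owner = 'SYSTEM' then into t2 (owner, table_name) values (owner, table_name)
select /*+ parallel(t3 8) */ owner, table_name
from   t3;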
Anonymous (Marek), 2016-03-04 21:55:

Hello, my name is Marek and I would like to find out whether there is a way to make a parallel multi-table insert use a conventional instead of a direct-path insert via the NOAPPEND hint.

I do see that it works fine with a single-table insert, however I can't achieve the same for a multi-table insert. I would be very grateful for any advice. Here is the sample code I used:

drop table t1;
drop table t2;

create table t1 (owner varchar2(50), table_name varchar2(50));
create table t2 (owner varchar2(50), table_name varchar2(50));

alter session force parallel dml parallel 8;

-- the multi-table insert doesn't seem to run in parallel anymore when the NOAPPEND hint is added
insert /*+ parallel(t2 8) noappend */ all
when owner = 'SYS' then into t1 (owner, table_name) values (owner, table_name)
when owner = 'SYSTEM' then into t2 (owner, table_name) values (owner, table_name)
select /*+ parallel(8) */ owner, table_name
from all_tables;

-- the regular insert still inserts in parallel when the NOAPPEND hint is added
insert /*+ parallel(8) noappend */ into t1
select /*+ parallel(8) */ owner, table_name
from all_tables
where owner = 'SYS';

Dan Jankowski (http://perfstat.wordpress.com), 2015-06-30 18:01:

The same behaviour is still observed in 12.1.0.2 - you can perform multiple inserts with PARALLEL NOAPPEND without getting ORA-12838.

Another thing to note: although the session statistic 'DML statements parallelized' does not increment for subsequent executions, the statistic 'DML Parallelized' in V$PQ_SESSTAT does increment. This view must be counting its PDML from a different source.

Dan
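A minimal sketch of how the two counters mentioned above could be compared from the loading session - the queries are illustrative only and simply assume access to the usual V$ views:

-- cumulative session statistic; per the observation above it does not
-- increase for subsequent executions in the same transaction
select sn.name, st.value
from   v$statname sn, v$mystat st
where  st.statistic# = sn.statistic#
and    sn.name = 'DML statements parallelized';

-- the separate counter in V$PQ_SESSTAT, which does keep increasing
select statistic, last_query, session_total
from   v$pq_sesstat
where  statistic = 'DML Parallelized';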
Randolf, 2013-07-03 15:54:

> As per my knowledge a parallel write is similar to APPEND, i.e. a direct-path write. And after a direct-path write, if we want to select we should get the ORA-12838 error.

Hm, have you read the above article? Starting from 11g a parallel insert doesn't have to be direct-path - pointing this out is the whole purpose of the article you've commented on.

So if you follow the above article it looks like your first INSERT was serial, not parallel, as otherwise you should have got the ORA-12838 error, even in the case of conventional parallel DML.

You haven't mentioned a version, nor have you shown what you did to create your objects / previously in your session, so a reproducible test case would be great to understand why you get the ORA-12838 on the second attempt after the rollback.

At least you could show us the real execution plans (using DBMS_XPLAN.DISPLAY_CURSOR) of those two INSERTs - another possibility is that the usage of ROWNUM in the first query somehow led to a (partially) serial execution plan.

Randolf
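For reference, a minimal sketch of the DBMS_XPLAN.DISPLAY_CURSOR call suggested above, run in the same SQL*Plus session immediately after each INSERT (SERVEROUTPUT needs to be off so that the INSERT is still the last cursor of the session):

set serveroutput off

-- show the actual execution plan of the last statement executed in this session
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));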
Snehasish Das (Anonymous), 2013-07-02 18:04:

Hi,

Good day. Hope you are fine and doing well.

As per my knowledge a parallel write is similar to APPEND, i.e. a direct-path write. And after a direct-path write, if we want to select we should get the ORA-12838 error.

Can you please explain the strange behaviour of ORA-12838 below?

SNEHASISH @ sdat >insert /*+ PARALLEL(EMP,4) */ into empl EMP
  2  select 2,owner from all_objects where rownum < 30;

29 rows created.

Elapsed: 00:00:00.26
SNEHASISH @ sdat >select * from empl;

     EMPID|ENAME
----------|----------------------------------------------------------------
       123|Tuna
       231|Abhimaniu
       342|Chandru
       435|Bibhu
         1|SYS
         1|SYS
         1|PUBLIC
         1|SYS

SNEHASISH @ sdat >ROLLBACK;

SNEHASISH @ sdat >insert /*+ PARALLEL(EMP,4) */ into empl EMP
  2  select * from empl;

46 rows created.

Elapsed: 00:00:00.01
SNEHASISH @ sdat >select * from empl;
select * from empl
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

Elapsed: 00:00:00.01

Why is it not giving the error in the first scenario?

Thanks and Regards,
Snehasish Das
...Hi,<br /><br />Good day. Hope you are fine and doing well.<br /><br />As per my knowledge parallel write is similar to APPEND i.e Direct path write. And after direct path write if we want to select we should get the ora-12838 error.<br /><br />Can you please explain me the below strange behavior of ORA-12838.<br /><br />SNEHASISH @ sdat >insert /*+ PARALLEL(EMP,4) */ into empl EMP<br /> 2 select 2,owner from all_objects where rownum < 30;<br /><br />29 rows created.<br /><br />Elapsed: 00:00:00.26<br />SNEHASISH @ sdat >select * from empl;<br /><br /> EMPID|ENAME<br />----------|----------------------------------------------------------------<br /> 123|Tuna<br /> 231|Abhimaniu<br /> 342|Chandru<br /> 435|Bibhu<br /> 1|SYS<br /> 1|SYS<br /> 1|PUBLIC<br /> 1|SYS<br /><br />SNEHASISH @ sdat > ROLLBACK;<br /><br />SNEHASISH @ sdat >insert /*+ PARALLEL(EMP,4) */ into empl EMP<br /> 2 select * from empl;<br /><br />46 rows created.<br /><br />Elapsed: 00:00:00.01<br />SNEHASISH @ sdat >select * from empl;<br />select * from empl<br />*<br />ERROR at line 1:<br />ORA-12838: cannot read/modify an object after modifying it in parallel<br /><br /><br />Elapsed: 00:00:00.01<br />SNEHASISH @ sdat ><br /><br /><br /><br />Why is it not giving error in the first scenario.<br /><br />Thanks and Regards,<br />Snehasish Das Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-5124641802818980374.post-84497719685852102432012-01-05T21:46:08.131+01:002012-01-05T21:46:08.131+01:00Hi Marco,
Marco, 2012-01-05 17:50:

Hi,

After the first example of parallel DML with the APPEND hint you say:

"Note that if there are usable indexes on the segment defined then the insert will maintain them. This index maintenance will generate undo and redo even if the data segment has been marked as and loaded using NOLOGGING. This holds true, by the way, even if the indexes are also marked as NOLOGGING, since that will only be applicable to index creation and rebuilds, but not index maintenance. If the amount of data loaded into the segment is significant it can be more efficient to set the index segments to unusable before loading the data and rebuild the index segments afterwards."

but v$transaction shows 1 as used_ublk for all the sessions involved.

Can you explain it to me?

Thanks.
Marco
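A minimal sketch of the unusable/rebuild approach described in the quoted passage - the index name T1_IDX is just a placeholder:

alter index t1_idx unusable;
alter session set skip_unusable_indexes = true;

-- ... perform the large (direct-path) load here ...

-- rebuild afterwards; NOLOGGING applies to the rebuild, as noted in the quote
alter index t1_idx rebuild nologging;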
Randolf, 2011-12-28 20:56:

> By the way, I found a test case about my problem but I am not sure I understand why the NOCACHE was problematic

There are a couple of things to consider:

1. If you want to scale parallel DML/DDL with LOBs you need to have multiple partitions involved - you can only have a single parallel slave working on each partition (so the parallel granule is the partition, not block ranges).

2. According to the post the problem with NOCACHE was not the NOCACHE per se but the NOLOGGING option. By default every NOLOGGING operation needs to update the controlfile to set the unrecoverable SCN information.

You can prevent that using event 10359 or in the latest releases using the DB_UNRECOVERABLE_SCN_TRACKING parameter (but look at bug 12360160).

3. Even if you consider all that, NOCACHE LOBs perform direct writes, so depending on your I/O configuration (async/direct I/O etc.) and I/O subsystem this might still significantly slow down your operation.

4. Also check the potential wastage caused by out-of-line LOBs and the CHUNK/block size used.

Randolf
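A minimal sketch of the two options mentioned in point 2 - these change recoverability-related behaviour, so treat them purely as illustrations to be verified against your own version and support notes (point 2 names the parameter and event, the exact level/scope below is assumed):

-- recent releases: disable the controlfile update for unrecoverable (NOLOGGING) operations
alter session set db_unrecoverable_scn_tracking = false;

-- older releases: the event mentioned above
alter session set events '10359 trace name context forever, level 1';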
jc (Christophe), 2011-12-27 23:13:

Thanks Randolf, after you mentioned this I read the FM, so it is now clear why I encounter this behaviour.

In the documentation they say it is possible to do an INSERT or CTAS in parallel for partitioned tables - I guess that refers to the destination table? I will test tomorrow.

By the way, I found a test case about my problem, but I am not sure I understand why the NOCACHE was problematic:

http://oraclequest.wordpress.com/2011/03/02/high-performance-batch-lob-insert/
http://oraclequest.wordpress.com/2011/03/02/high-performance-batch-lob-insert-part-2-subpartitions/

I'll post my result tomorrow. Thanks again.

Christophe

Randolf, 2011-12-27 20:46:

> Have you ever noticed that PDML was serialized

Hi JC,

there are a couple of officially documented restrictions to parallel DML, in particular here:
http://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel003.htm#VLDBG1455
and here:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9014.htm#SQLRF01604

LOBs are one of those mentioned there for intra-partition parallel DML.

Randolf
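A rough sketch of using a partitioned target so that parallel DML with a LOB column remains possible (one slave per partition, as the restrictions above describe) - table name, columns and DOP are made up, and the actual behaviour depends on the version and the documented LOB restrictions:

create table t_lob (
  id  number,
  doc clob
)
partition by hash (id) partitions 8;

alter session enable parallel dml;

-- with a partitioned table each PX slave can work on its own partition,
-- so the effective DOP is limited by the number of partitions
insert /*+ parallel(t_lob 8) */ into t_lob
select rownum, rpad('x', 4000, 'x')
from   dual
connect by level <= 100000;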
jc (Christophe), 2011-12-27 17:33:

Hello Randolf,

Have you ever noticed that PDML is serialized (no parallel insert) when there is a LOB column to be inserted?

I spent the whole day trying to find out why my INSERT was not done in parallel!

So after spotting that, I did a small test: without the LOB my insert is done in parallel, with the LOB column it is serialized.

Good to know..

jc

Greg Rahn (http://structureddata.org/), 2011-03-01 01:55:

Parallel conventional (NOAPPEND) insert was an 11g new feature, though it seems to have escaped the new features list in the docs.

It was added to support cases where a parallel insert as select was desired, but the restrictions associated with direct-path inserts were not.

Greg Rahn
Randolf, 2011-02-27 13:34:

Greg,

thanks for the comment regarding my inaccurate description of the temp segment merge operation and the exaggerated space wastage of my particular demo setup, much appreciated.

By the way, the point of this post is not the performance of a parallel DML insert but demonstrating the principle of conventional parallel DML inserts, which seem to be new in Oracle 11g.

Maybe you can shed some light on the question why and for what purpose it has been added?

Randolf

Greg Rahn (http://structureddata.org/), 2011-02-26 22:16:

> By the default every parallel slave of the operation will use its own temporary segment for loading

This is only true for inserts into non-partitioned tables. Partitioned table inserts use HWM brokered loads.

Also, I think your example uses too few rows in combination with too high a DOP (looks like DOP=8), and as a result you see a large space bloat with PDML inserts. If you used more rows you would see less space bloat. With only ~50k rows / ~700 blocks, assuming 8k blocks, that's less than 1MB per slave. Not only that, your noparallel CTAS is already sub-second, so it's a poor case for Parallel Execution to start with.

Greg Rahn
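For completeness, a simple way to look at the space allocated to the target segment after such a test - illustrative only, assuming the target table is called T1 and lives in the current schema:

-- compare allocated blocks/MB after the serial CTAS vs. the PDML insert
select segment_name, blocks, round(bytes / 1024 / 1024) as mb
from   user_segments
where  segment_name = 'T1';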