Comments feed for "Oracle related stuff" (last updated 2023-09-06)

[2023-09-06] Anonymous:
And I had actually built a PL/SQL package to do the MOVEs and REBUILDs, given schema, table_name and/or index_name as parameters.

[2023-03-20] Vishal Beriwal:
Really sorry about that, Randolf. I was reading a couple of blogs around that time, one of them from Christian Antognini, and I probably mixed them up. Apologies. Thank you for answering the questions.

[2023-02-14] Peter Hitchmann:
I have experienced this with 11.2.0.4. The only way I found to fix it was to insert first into a global temporary table and then into the target table. My experiments and a discussion on Oracle-L led me to believe that using a partitioned table is the significant factor, plus the database being "busy", although the example in this fine post shows that there is probably more than one way to trip over this issue. When I turned on tracing for event 14403 I found that it occurred even for the most simple "INSERT ... VALUES ..." statement, but in those cases the retry only happened once.
William's comment about this reproducing in release 19 is interesting, and it is a bit concerning that Oracle appears to see this issue as "just the way it works".

[2022-04-13] Anonymous:
So I am wondering: has this been addressed, and how?

[2022-01-13] dif:
You can find all the exec stats here (along with 10046 tkprof formatted output): https://community.oracle.com/mosc/discussion/comment/16858905#Comment_16858905
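Peter's workaround of staging through a global temporary table, and the event-14403 tracing he mentions, might look like the following sketch (table names are made up; the event syntax is the generic diagnostic-event form and should be verified for your version):

```sql
-- Hypothetical staging workaround: load a GTT first, then the partitioned target
CREATE GLOBAL TEMPORARY TABLE stage_t ON COMMIT PRESERVE ROWS
AS SELECT * FROM target_t WHERE 1 = 0;

INSERT INTO stage_t SELECT * FROM source_t;   -- conventional load into the GTT
INSERT INTO target_t SELECT * FROM stage_t;   -- then into the partitioned table
COMMIT;

-- Diagnostic tracing for the ORA-14403 cursor-invalidation retries
ALTER SESSION SET EVENTS '14403 trace name context forever, level 1';
```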
[2022-01-13] Randolf:
Hi,

as already outlined, I would start by capturing the session statistics before / after the run (snapper can do this in "developer mode", but the MYSTATS / RUNSTATS packages also do this and automatically diff the statistics). Default snapper isn't that useful here, since it just captures session statistics for a limited time period without diffing them.

Regarding your SQL Monitor findings: obviously the other run does 700K more I/O requests, and these are very likely single block reads, which explains why you get a different average I/O size. You could try to find out which blocks these I/O requests are reading (CURRENT_FILE# / CURRENT_BLOCK#); possibly these are ASSM space management blocks, and some part of the different behaviour might be due to ASSM space management issues. If it were block cleanout related you should see more waits / I/Os on the UNDO tablespace, but to be sure you should enable logical I/O tracing, or check the session statistics as already mentioned.

Regards,
Randolf

[2022-01-11] dif:
Of course I have all exec details (AWR, ASH, SQL Monitor, snapper stats); trying to get 10046 for all the tests as well.

What I found strange in the SQL Monitoring report (IO section) when comparing the executions from 5) and 6):

5) Buffer gets: 3.6M
   IO requests: 800K
   I/O bytes: 7.9GB
   Average I/O size: 10.3 KB

   select count(1), session_state, event, current_obj#, current_file#
   from v$active_session_history
   where sql_id = '8p0x9uc3udpyj' and sql_exec_id = 16777223
   group by session_state, event, current_obj#, current_file#
   order by 1 desc;

   COUNT(1) SESSION EVENT                                CURRENT_OBJ# CURRENT_FILE#
   -------- ------- ------------------------------------ ------------ -------------
       2987 ON CPU                                            3320288           288
        714 ON CPU                                                  0           261
         41 WAITING cell single block physical read           3320288           288
         41 WAITING cell multiblock physical read             3320288           288
         41 WAITING cell list of blocks physical read         3320288           288
         32 ON CPU                                            3320495           288
         13 WAITING cell list of blocks physical read               0           261
          3 ON CPU                                                312           259
          2 WAITING cell list of blocks physical read         3320495           288

6) Buffer gets: 3.4M
   IO requests: 109K
   I/O bytes: 7.4GB
   Average I/O size: 70.9 KB

   select count(1), session_state, event, current_obj#, current_file#
   from v$active_session_history
   where sql_id = 'cauajh6txt23u' and sql_exec_id = 16777219
   group by session_state, event, current_obj#, current_file#
   order by 1 desc;

   COUNT(1) SESSION EVENT                                CURRENT_OBJ# CURRENT_FILE#
   -------- ------- ------------------------------------ ------------ -------------
        894 ON CPU                                            3320288           288
         21 WAITING cell multiblock physical read             3320288           288
         15 ON CPU                                            3320496           288
         11 WAITING cell single block physical read           3320288           288
          9 WAITING cell single block physical read           3320496           288
          2 ON CPU                                                  0           261

What could be the reason for the higher average I/O size in the second case (on the table that was populated first with a direct path insert)?

Indeed the table populated with the direct path insert is smaller:

   SEGMENT_NAME                SIZE_MB     BLOCKS
   ------------------------ ---------- ----------
   BIGTAB                    8692.8125    1112680
   T_HCC_ARCHIVE_HIGH             5120     655360
   T_HCC_ARCHIVE_HIGH2            4864     622592

The tests were performed on a dev machine with no other workloads. Auto DOP is not used.
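The before / after session statistics capture Randolf suggests can be sketched with a simple snapshot table; this is a manual stand-in for the MYSTATS / RUNSTATS tools he mentions, and stats_before is a made-up name:

```sql
-- Snapshot the current session's statistics before the test
CREATE TABLE stats_before AS
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#;

-- ... run the INSERT under test in the same session ...

-- Diff: statistics that changed during the run, largest delta first
SELECT sn.name, ms.value - sb.value AS delta
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
JOIN   stats_before sb ON sb.name = sn.name
WHERE  ms.value - sb.value <> 0
ORDER  BY delta DESC;
```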
[2022-01-10] Randolf:
Hi,

many possibilities come to mind, so I would start with the obvious:

- Capture session statistics for those runs; that should give you a first hint at what is different. If you run this on a test environment where you have exclusive access, system / instance statistics should also be fine, so you could simply generate an AWR snapshot before and after if you happen to have a Diagnostics Pack license; otherwise use Statspack, or the freely available Snapper / MyStats / RunStats tools on session level (more reliable).

- It would be interesting to see whether you achieve different compression ratios with these different attempts. I could imagine, for example, that the quicker runs don't compress the same way the others do. In the past HCC compression was only supported with direct path operations; this changed with 12.2.0.1, where HCC compression is also supported with conventional loads, but I don't have that much experience with that improvement, so I'm not sure how it works in detail.

- If you make use of Auto DOP you also need to consider that some runs might make use of parallelism and others not, even if you don't specify any parallelism explicitly.

So, as usual, more details will help to explain the different behaviour.

Regards,
Randolf
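One way to check whether the rows actually ended up HCC-compressed, as suggested above, is to sample rowids through DBMS_COMPRESSION.GET_COMPRESSION_TYPE. This is only a sketch: the numeric type codes returned (e.g. for ARCHIVE HIGH) differ between releases, so map them against the DBMS_COMPRESSION package constants for your version:

```sql
-- Sample ~1% of the rows and count them by actual compression type
SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'T_HCC_ARCHIVE_HIGH', rowid) AS comp_type,
       COUNT(*) AS row_cnt
FROM   t_hcc_archive_high SAMPLE (1)
GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'T_HCC_ARCHIVE_HIGH', rowid)
ORDER  BY row_cnt DESC;
```

If the conventional loads show a different mix of compression types than the direct path load, that alone could explain part of the size and timing differences.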
[2022-01-10] dif:
Hi Randolf,

while doing some tests for HCC (in 19c) I came across a scenario that may involve block cleanouts (that's how I ended up on this page); it would be great if you could share a thought on this:

0. bigtab has 20M rows (12 randomized columns).
1. BEGIN EXECUTE IMMEDIATE 'drop table t_hcc_archive_high'; EXCEPTION WHEN OTHERS THEN NULL; END;
   BEGIN EXECUTE IMMEDIATE 'drop table t_hcc_archive_high2'; EXCEPTION WHEN OTHERS THEN NULL; END;
2. CREATE TABLE t_hcc_archive_high COMPRESS FOR ARCHIVE HIGH AS SELECT * FROM bigtab WHERE 1 = 0;
   CREATE TABLE t_hcc_archive_high2 COMPRESS FOR ARCHIVE HIGH AS SELECT * FROM bigtab WHERE 1 = 0;
3. INSERT INTO t_hcc_archive_high SELECT * FROM bigtab; COMMIT;
   -- 76 minutes to insert 20M rows into the empty t_hcc_archive_high
4. INSERT /*+ append */ INTO t_hcc_archive_high2 SELECT * FROM bigtab; COMMIT;
   -- 20 minutes to insert 20M rows into the empty t_hcc_archive_high2
5. INSERT INTO t_hcc_archive_high SELECT * FROM bigtab; COMMIT;
   -- 76 minutes to insert 20M rows on top of the already existing 20M rows
6. INSERT INTO t_hcc_archive_high2 SELECT * FROM bigtab; COMMIT;
   -- 16 minutes to insert 20M rows on top of the already existing 20M rows

I couldn't find any explanation for why the second conventional insert takes just 16 minutes (no. 6); the test was reproduced multiple times and the results are consistent.

Any hints?

Regards
dif
[2020-09-26] Chris Antognini:
Hi Randolf,

notice that the current 19c documentation that you reference contains the following sentence: "The extension can contain an expression only if a corresponding virtual column has been created."
So, they tried to fix it, but they did it incorrectly ;-)

Cheers,
Chris

[2020-08-08] William Robertson:
Also, it didn't seem to need the condition "The DML statement hasn't touched yet one of the partitions". In our case it was an INSERT with a 9-way join that normally takes around 15 minutes to load several million rows. Most of the tables are partitioned. Any partition/subpartition DDL on any partition of any of the 10 tables caused v$sql.object_status to change immediately from VALID to INVALID_UNAUTH until the internal rollback completed (which could easily take an hour, depending on how far it had got before being invalidated), when it would go back to VALID and sql_exec_id and sql_exec_start would be reset.

[2020-08-08] William Robertson:
Yes, it still happens in 12.2 and 19.0, regardless of the CURSOR_INVALIDATION setting or the equivalent "deferred invalidation" clause.
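For reference, the 12.2+ deferred invalidation controls William mentions can be set session-wide or per DDL statement; a sketch with made-up object names (sales, p2019):

```sql
-- Session-wide default for cursor invalidation on DDL (12.2+)
ALTER SESSION SET cursor_invalidation = DEFERRED;

-- Or per DDL statement, overriding the session default
ALTER TABLE sales TRUNCATE PARTITION p2019 DEFERRED INVALIDATION;
```

Per William's observation, neither form prevented the immediate INVALID_UNAUTH transition for the in-flight DML in his case.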
[2020-05-14] Randolf:
Hi Vishal,

Randolf here; Christian might be Christian Antognini? But that's not his blog :-)

Anyway, if you read the blog post carefully you'll notice that there are pros and cons to having multiple DFO trees. One of the potential drawbacks of having everything in one DFO tree is that you can end up with more "BUFFERED" hash joins.

What I'm trying to say is that this isn't black and white; as so often, it depends on a lot of details which of the two variants is more desirable, and this can change from case to case.

Having said that, to answer your questions:

1. There is no direct way to control the number of DFO trees, no matter which version you run. The number of DFO trees depends on the SQL features used and the execution plan generated by the optimizer. You do have some indirect control by replacing one feature with another (for example, don't use materialized WITH clauses, don't use certain kinds of analytic functions, don't make use of ROWNUM, etc.).

2. The 1 SLAVE feature described here isn't backported to pre-12c as far as I know. You need at least the 12c optimizer code to get support for that feature (which means that using, for example, OPTIMIZER_FEATURES_ENABLE = 11.2.0.4 in 12c and later will prevent the feature).

Randolf

[2020-05-13] Vishal Beriwal:
Hi Christian,

a couple of questions here:

1. As you said, multiple DFO trees could be undesirable. Is there a way we can control the number of DFO trees on 11.2.0.4?

2. And about this 1SLAVE new feature which addresses the multiple DFO problem: is there a fix for this that could be backported to 11.2.0.4?
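Randolf's point about materialized WITH clauses being one of the indirect levers can be illustrated like this (a sketch with a made-up table t): forcing the subquery inline avoids the temp-table transformation that can otherwise introduce an additional DFO tree in a parallel plan.

```sql
-- Materialized WITH clause: the temp-table load can become its own DFO tree
WITH w AS (SELECT /*+ MATERIALIZE */ deptno, SUM(sal) AS s FROM t GROUP BY deptno)
SELECT * FROM w JOIN t USING (deptno);

-- Inlined WITH clause: merged into the main query, no separate temp-table load
WITH w AS (SELECT /*+ INLINE */ deptno, SUM(sal) AS s FROM t GROUP BY deptno)
SELECT * FROM w JOIN t USING (deptno);
```

Comparing the two plans (e.g. via DBMS_XPLAN) should show whether a TEMP TABLE TRANSFORMATION and a second DFO tree appear in the materialized variant.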
[2020-04-20] Andreas Schloegl:
Hello Randolf,

thanks a lot for the excellent article, it helped a lot in troubleshooting a "long parsing time" issue last week. I wrote a little note about this issue, maybe you want to check it out: https://medium.com/@andischloegl/long-parse-due-to-skew-detection-in-hybrid-hash-distribution-dfd643f9b91b
[2020-03-26] Randolf:
Hi Vishal,

apologies for the delayed reply; I didn't see your comment and only found it today.

I wouldn't recommend modifying global settings in that case. The first step is determining why you see the difference in behaviour. It could just be a side effect of different execution plans due to changes in estimates.

If you say you see uneven distribution between parallel slaves, how did you determine that? You could use my XPLAN_ASH script to visualize the uneven distribution, which would allow gaining more understanding.

Depending on the outcome you can then find out what the most sensible measure is.

Best regards,
Randolf
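As a quick first check before reaching for XPLAN_ASH, the skew is often visible directly in ASH (requires the Diagnostics Pack license; :sql_id is a placeholder): an evenly distributed plan shows similar sample counts per PX session, while a skewed one concentrates most samples on a single session.

```sql
-- Count ASH samples per session for one statement, PX executions only
SELECT inst_id, session_id, COUNT(*) AS ash_samples
FROM   gv$active_session_history
WHERE  sql_id = :sql_id
AND    qc_session_id IS NOT NULL   -- sessions belonging to a PX execution
GROUP  BY inst_id, session_id
ORDER  BY ash_samples DESC;
```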
[2020-03-09] Vishal Beriwal:
Hi Randolf,

I have a question, if you could help. We recently upgraded from 11.2 to 18c and our reporting queries are running longer. I found that there is uneven distribution between the parallel slaves: one slave does all the work. I believe that has something to do with the new PX features.

I don't want to set OFE to 11.2.0.4 because it covers hundreds of parameters and fix controls.

What is the list of parameters on 18c that I can turn off to get the 11.2-like behaviour (only the PX-related parameters)?

Best regards,
Vishal
[2019-08-20] Randolf:
Hi Dominic,

good point. I don't have too much experience in that regard with 12.2 and later, so yes, I agree: in theory the feature should reduce the impact. But I can't say anything about real-life experience, so time will tell as more clients move to 12.2+.

Kind regards,
Randolf
[2019-08-20] Dominic Brooks:
Hi Randolf,

presumably 12.2 fine-grained cursor invalidation might reduce some instances of this issue, if the partition that was the target of the DDL was not relevant to the cursor in question?
https://blogs.oracle.com/optimizer/fine-grained-cursor-invalidation

Cheers,
Dominic

[2019-08-07] Robert Ortel:
XPLAN_ASH works fine for queries which run into that ORA-06521 when analyzed with xplan_extended_display_cursor.sql. Thank you very much :-).
[2019-08-07] Robert Ortel:
Hi Randolf,

the database versions are 12.1.0.2.180717, 18.5 and 18.7, which all run into this. I never bothered with the output options and was fine with the default. I will check your comments and see if that helps.

PGA for the 12.1.0.2.180717 database is:

   pga_aggregate_limit  0
   pga_aggregate_target 90G

I will get in contact with you via mail to reduce the exchange here.

Regards,
Robert
[2019-08-05] Randolf:
Hi Robert,

could you mention which Oracle database version?

I'm regularly using XPLAN_ASH, which is the newer and more sophisticated version of that script (when using ASH option "NONE" and the ALLSTATS [LAST] formatting option, XPLAN_ASH will only show rowsource statistics and no Active Session History related output), and it works for me even for extremely complex execution plans (up to several thousand operations / lines). I've never encountered the mentioned error so far, up to and including version 18c. So maybe you can provide more details, like the exact version used and maybe the PGA memory settings etc.

Randolf
[2019-08-01] Robert Ortel:
Hi Randolf,

very, very nice, but it runs into ORA-06522: CANNOT ALLOCATE ENOUGH MUTABLE AREA for complex execution plans, it seems, and only runs fine as long as the execution plan is simple enough, to some unknown degree. Unfortunately, oerr and Oracle Support are not of great help in understanding what that error actually means.

oerr:
   ORA-06522: string
     Cause: ORA-06520 or ORA-06521 could stack this error with a system specific error string.
     Action: This error string should give the cause for errors ORA-06520 or ORA-06521.

Oracle Support:
   This error is due to the script that is provided and has to do with the associated data types.
   An object's mutability is determined by its type. ... Other objects like integers, floats, strings and tuples are objects that can not be changed.
   So there is an issue with the script overflowing the object type in one or more areas.

Maybe it helps you to understand and maybe fix it.

Robert
[2018-04-08] Randolf:
Hi Amit,

just check out my XPLAN_ASH script, which I've published as well; it adds all of that, but requires the Diagnostics Pack license for Active Session History support (and the Tuning Pack license for Real-Time SQL Monitoring support).

Randolf

[2018-03-26] chris:
This came at the perfect time. We are building up a new RAC system and would like to get benchmark numbers to know what the system is capable of.

Looks very user friendly.

Thanks!