Comments on "Oracle related stuff: Understanding the different modes of System Statistics aka. CPU Costing and the effects of multiple blocksizes - part 1" (14 comments)

Randolf said... (2009-05-05 18:04)
John,

I think there is a misunderstanding/confusion, and maybe my wording was not clear enough.

The "adjusted" MBRC refers to NOCPU_COSTING, aka. traditional I/O-based costing, where the MBRC defined by "db_file_multiblock_read_count" (or "_db_file_optimizer_read_count" in 10g and later) is "adjusted" by a fixed factor to cater for the fact that you might not get that large a number of blocks at runtime, as outlined in the post.

It's not about the MBRC that is used as part of the CPU_COSTING I/O calculation. Your example is simply using the formula provided in a different way, so I think we agree on that.

Best Regards,
Randolf

John Brady said... (2009-05-05 16:33)
Randolf,

First, a good post, as I was just researching the same information myself.

Second, I think your formula for the "adjusted MBRC" is wrong. I call it the "estimated MBRC", i.e. the value the optimizer estimates will be achieved when the query is run.

The actual formula used by the optimizer is:

    Estimated MBRC = MBRC * SREADTIM / MREADTIM

I got this from Christian Antognini's book "Troubleshooting Oracle Performance".

With an 8 KB block size, MBRC set to 8, and defaults for NOWORKLOAD system statistics, the estimated MBRC is always 3.69.

This agrees with your example for 10,000 blocks: 10000 / 3.69 = 2710, which is the cost given for your second execution plan with the cpu_costing hint.

This carries through to your second post, which has the MREADTIM and SREADTIM values the wrong way around too. I will post a comment on that separately.

John

Uwe Küchler said... (2009-05-04 18:33)

@Randolf: Those were in all cases WORKLOAD stats, manually collected with START/STOP.

Regards,
Uwe
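The estimated-MBRC arithmetic discussed above can be checked with a short calculation. This is a minimal sketch, assuming the documented NOWORKLOAD defaults (IOSEEKTIM = 10 ms, IOTFRSPEED = 4096 bytes/ms) and the synthesized-timing formulas for SREADTIM and MREADTIM; the variable names are illustrative only:

```python
# NOWORKLOAD system statistics: SREADTIM and MREADTIM are synthesized
# from IOSEEKTIM and IOTFRSPEED rather than measured.
IOSEEKTIM = 10        # ms, assumed default seek time per I/O
IOTFRSPEED = 4096     # bytes/ms, assumed default transfer speed
BLOCK_SIZE = 8192     # 8 KB block size
MBRC = 8              # db_file_multiblock_read_count setting

# Synthesized timings
sreadtim = IOSEEKTIM + BLOCK_SIZE / IOTFRSPEED          # 10 + 2  = 12 ms
mreadtim = IOSEEKTIM + MBRC * BLOCK_SIZE / IOTFRSPEED   # 10 + 16 = 26 ms

# The "estimated MBRC": the multiblock read count the optimizer
# effectively assumes once the timings are factored in
estimated_mbrc = MBRC * sreadtim / mreadtim             # 8 * 12 / 26 ~ 3.69

# I/O cost of a full table scan of 10,000 blocks under CPU costing:
# number of multiblock reads, weighted by mreadtim / sreadtim
blocks = 10000
fts_cost = blocks / MBRC * mreadtim / sreadtim          # ~ 2708.3

print(f"SREADTIM = {sreadtim:.0f} ms, MREADTIM = {mreadtim:.0f} ms")
print(f"estimated MBRC = {estimated_mbrc:.2f}")
print(f"FTS I/O cost ~ {fts_cost:.1f}")
```

The raw result of about 2708 shows up as 2710 in the actual plan because of rounding; the point is that the ratio MREADTIM/SREADTIM (26/12 here) is what scales the MBRC setting of 8 down to the "estimated" 3.69.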
Randolf said... (2009-05-02 21:43)

@Coskan:

I'm not sure what you mean by "manual ASSM"... As already outlined, in general I would recommend MSSM if there is no specific reason to use ASSM.

Some of the reasons why you might want to use ASSM:

- You have a highly concurrent application and/or RAC.

- You don't want to deal with the MSSM details (not really a good argument from my point of view).

- You want to use features that have ASSM as a prerequisite, e.g. shrinking segments online, or using BIGFILE tablespaces.

The last point might become more and more important in the future, since Oracle might add more features that have ASSM as a prerequisite.

The best thing, of course, is to test it for your particular requirements to find out which one suits you best.

@Uwe: And do you know what kind of system statistics are used in some of these 9i databases (NOWORKLOAD, WORKLOAD)? Just out of curiosity.

Regards,
Randolf

Uwe Küchler said... (2009-05-02 17:26)
Randolf,

I should have pointed out that most of the databases at this site still run on 9i. It's a bank, after all, which means that you usually aren't on the leading edge. This includes using new features, although nobody there complained about either the default gathering of system stats or the nightly schema stats job in 10g... ;-)

Thanks for pointing me to Jonathan's article, I'll take a closer look at it.

Have a sunny weekend,
Uwe (sitting in the garden, working on the second edition of the Oracle Survival Guide)

Coskan Gundogar said... (2009-05-02 11:43)
I don'...Thank you for this brief and clean reply. <br /><br />I don't see a problem using MSSM too on my machine (except 1-2 more tablespace creation for demos )but when every expert uses MSSM just for simple demos then people might think ASSM is evil. that was the reason I asked the question. <br /><br />Do you suggest manual assm to your clients if they are not tight fisted about disk space?Coskan Gundogarhttps://www.blogger.com/profile/11570869033287689498noreply@blogger.comtag:blogger.com,1999:blog-5124641802818980374.post-75560802085996726532009-05-01T22:23:00.000+02:002009-05-01T22:23:00.000+02:00Coskan,
Coskan,

The simple answer to your question regarding the use of ASSM in test cases is that with ASSM the object is slightly larger, due to the additional blocks required for managing the segment space.

With MSSM the size is easier to control, so my table here, using PCTFREE 99 and generating 10,000 rows, results in a table having exactly 10,000 blocks. You would notice that the same table created in an ASSM tablespace has more than 10,000 blocks; in my test database it's approx. 10,130 blocks, making the maths more complex when demonstrating the formulas used for cost calculations.

That said, I know that Oracle recommends ASSM, but if you follow publications about ASSM carefully, you'll notice that it has - like all features - pros and cons, and you should be aware of them in order to decide whether ASSM is really beneficial for you or not.

Some pros:

- Possibly reduced contention in highly concurrent OLTP systems.
- No special treatment required in RAC environments (MSSM might require manual tuning using multiple freelists or freelist groups, and you need to understand how it works to tune it properly).

Some cons:

- Can negatively affect the clustering factor of indexes.
- Segments are slightly larger; this applies in particular to small segments and is negligible for larger ones.
- No manual tuning possible.
- Relatively new technology (OK, it's been there since 9i...), therefore still some bugs, in particular when using non-standard block sizes != 8 KB.

So if you, e.g., have a single-instance system that doesn't have to deal with highly concurrent operations, using MSSM should be just fine, and I don't see an issue in using an old but mature technology.

Regards,
Randolf

Coskan Gundogar said... (2009-05-01 21:56)
One question: why not always ASSM?

* New databases after 2005
* Databases upgraded with export/import

I think most of them use ASSM, and it is Oracle's suggested strategy.

I personally don't understand why most of the known experts don't use it in their demonstrations. Can I learn why?

Do you suggest to all of your customers not to use ASSM?

By the way, very nice post.

Coskan Gundogar (coskan.wordpress.com)

Randolf said... (2009-04-30 18:08)
Uwe,

Can you clarify what you mean by encouraging the use of system statistics, if you're not using custom system statistics?

My point is that in 10g system statistics are enabled by default, so there is not much to "encourage", and in 9i usually only individual WORKLOAD system statistics get used (more on this topic in the remaining parts of the series), while NOWORKLOAD system statistics are only used very rarely.

Regarding the alternating system statistics: I remember a comment by Jonathan Lewis (http://jonathanlewis.wordpress.com/2007/05/20/system-stats-strategy/) that he has come to the conclusion that he doesn't recommend that any more, although he liked the idea in the beginning.

Randolf
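The practical impact of the default switch from traditional I/O-based costing to NOWORKLOAD CPU costing in 10g can be sketched numerically. This is a hedged illustration: the "adjusted" MBRC value of 6.59 for a setting of 8 under traditional costing is the commonly published figure and is an assumption here, not taken from this thread.

```python
# Full table scan of 10,000 blocks, 8 KB block size, MBRC setting of 8.
blocks = 10000
block_size = 8192
mbrc = 8

# Traditional (NOCPU) costing: blocks divided by a fixed "adjusted" MBRC.
# 6.59 is the commonly published adjusted value for a setting of 8 (assumed).
adjusted_mbrc = 6.59
traditional_cost = blocks / adjusted_mbrc               # ~ 1517

# Default NOWORKLOAD CPU costing: timings synthesized from the assumed
# defaults IOSEEKTIM = 10 ms and IOTFRSPEED = 4096 bytes/ms.
sreadtim = 10 + block_size / 4096                       # 12 ms
mreadtim = 10 + mbrc * block_size / 4096                # 26 ms
noworkload_cost = blocks / mbrc * mreadtim / sreadtim   # ~ 2708

print(f"traditional I/O costing : ~{traditional_cost:.0f}")
print(f"NOWORKLOAD CPU costing  : ~{noworkload_cost:.0f}")
```

Under these assumptions the same full table scan of the same data costs roughly 1.8 times more after an upgrade silently switches the costing model, which is why plans can change without any change to the application or its object statistics.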
Uwe Küchler said... (2009-04-30 17:29)

Kerry,

Maybe you just polled the wrong DBAs... ;-) In fact, I encourage my customers to make use of system statistics where it makes sense. E.g., at an investment bank, system stats have been in production for a few years now, although scarce (3 out of the about 20 systems I do consulting for). Still not in production are alternating system stats - I'd like to see them in a data warehouse, applied for nightly batches and daily reports. I'm working on it...

[A little later] OK, now that I have read your clarification: no, I did not tamper with the default stats either.

Regards,
Uwe

Randolf said... (2009-04-30 16:26)
Kerry,

Jonathan's comment - as always - highlights the core message: the most important part of the system statistics is the ratio between the SREADTIM and MREADTIM values, since this primarily drives the cost of the full table scan operation in comparison to index access paths.

That is the reason why I tried to explain how the SREADTIM and MREADTIM values are synthesized when using NOWORKLOAD system statistics, and how they change with different "db_file_multiblock_read_count" settings, resulting in different cost calculations for the FTS operation.

As can be seen from the formulas used, the block size also influences the SREADTIM and MREADTIM calculation, so there are several parameters that determine how the optimizer arrives at these values.

So Jonathan is probably right that the assumed defaults are reasonable in most cases, and most people won't care as long as everything runs smoothly, but if there is a need for troubleshooting, understanding how the values are derived can become a crucial part of the analysis.

Randolf

Kerry Osborne said... (2009-04-30 02:10)

Glad to hear you've got more almost ready to go. I'm looking forward to the rest of the series.

I wasn't clear in my statement. My question to the DBAs was whether any of them had worked with databases where system stats had been set to something other than their default values. Although it may well have been misinterpreted, as I wasn't that clear in my question to them either.
Anyway, I did get a couple of responses - but for the most part, the answer was a pretty consistent "no, we are not working with databases where system stats have been changed from their default values".

One guy reported having worked on a system where system stats were gathered once per week - but I guess the results must not have changed too dramatically, because he said they didn't have any stability issues (that he was aware of) due to the changing system stats.

I did hear Jonathan Lewis make a comment about system stats at the Hotsos Symposium a month or so ago. He said something to the effect that he had come to the conclusion that leaving them at the defaults seemed to work pretty well. He mentioned something about the calculated ratio between the MREADTIM and SREADTIM ending up being pretty reasonable and representative for a lot of systems. It was just a comment in passing, so not a lot of detail.

Anyway, I do think it's not a well-understood feature, so I'll be interested to read the rest of your postings on the topic.

Kerry

Randolf said... (2009-04-29 01:10)
Kerry,

I find the NOWORKLOAD part already quite interesting :-), in particular the change in the MREADTIM calculation with different MBRC settings, and the obviously significant cost change compared to traditional I/O-based costing.

Regarding the system statistics usage: the point I wanted to make here is that in 10g you work with system statistics, always, period (unless you downgrade your optimizer_features_enable to 9i or use undocumented parameters).

So the DBAs you've asked are either not aware of this, or took your question to mean whether they had explicitly gathered system statistics or not.

A lot of the trouble when migrating from pre-10g to 10g or 11g comes from the fact that people are not aware that, without any changes to the application/database, the cost calculation of the optimizer is by default significantly different due to the default NOWORKLOAD system statistics - not to mention the default statistics gathering job.

To answer your question: the work is already done, it's just that I realized it was far too much for a single post, so I decided to split it into several.

Best Regards,
Randolf

Kerry Osborne said... (2009-04-29 00:49)

Good idea for a series of posts! How long do you think it will be before you publish the more interesting scenarios? ;)

On a side note, I polled about 20 or so DBA consultants, and so far, none of them have reported working on a system that has set system stats.
They could all be too busy to respond, but I suspect that there are few systems using this feature, even though it's been available for a long time.

Kerry