
Chuck Coleman 2002-11-27 14:28:41 |

Once again, John has done a splendid job of crunching numbers. He ran 144 replications and got an average efficiency of ~2.96% with a standard deviation of ~0.03%. His graph shows that the "rogue" point was just a reasonable extreme value; one can see similar "rogues" at both extremes. Again, I cannot reject the hypothesis that the mean efficiency is 2.95%.

In fact, there is a broader point. A confidence interval is the interval within which one has captured the true value of the parameter with the stated probability. I've computed approximate (because of heteroscedasticity) 90%, 95% and 99% confidence intervals, listed below.

90%: [2.91, 3.01]
95%: [2.90, 3.02]
99%: [2.88, 3.04]

The upshot of this is that my characterization of the existence of a broad plateau at 2.95% is not accurate. Rather, there is a plateau, which may have a discernible slope (needed for any optimization algorithm), veiled in a thick fog in the vicinity of 2.95%. I don't know what kind of accuracy the sponsors want - if they're satisfied with being within 0.2% of the truth, you're OK. If they want greater accuracy, you've got problems.

quote:
I can understand your motivation, but you need to have a statistically sound basis for whatever smoother you use. Otherwise, your results will be meaningless.

To do this right, I suspect you're going to need at least two orders of magnitude more particles for every design. John has already pointed out that this is feasible. You have good starting points in the best1000 (including the deleted "rogues"). I suggest that you find specialists in statistics and optimization in Oxford to help you.

BTW, I've decided to withdraw my computer from this project. It is old, slow and simply overwhelmed. I've tried to use the new manualsend.exe to upload my lone result, but get a "The URLMON.DLL file is linked to missing export SHLWAPI.DLL:UrlGetLocationW." error. The message is so long that it has been truncated.

"Sorry, no concluding witticism" |
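For what it's worth, the three intervals quoted above are numerically consistent with a simple normal approximation, mean ± z·sd, applied to the quoted summary statistics (mean ≈ 2.96, sd ≈ 0.03). Note this spreads the interval by the per-run standard deviation, not the standard error of the mean, so it describes where individual runs fall rather than how precisely the mean is pinned down. A minimal sketch, assuming that construction:

```python
from statistics import NormalDist


def normal_interval(mean, sd, level):
    """Two-sided interval mean +/- z*sd for the given coverage level."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # two-sided critical value
    return round(mean - z * sd, 2), round(mean + z * sd, 2)


# Summary statistics quoted in the post: 144 runs, mean ~2.96%, sd ~0.03%
for level in (0.90, 0.95, 0.99):
    lo, hi = normal_interval(2.96, 0.03, level)
    print(f"{level:.0%}: [{lo:.2f}, {hi:.2f}]")
```

Running this reproduces the three intervals in the post ([2.91, 3.01], [2.90, 3.02], [2.88, 3.04]), which is consistent with how they were computed, heteroscedasticity caveat aside.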