Here's what I have:
I'm running some computer simulations to test various alternatives, and each one can take many hours. I'm looking for a way to reduce the number of runs I have to make.
I have a "do nothing" scenario with n=13 runs, which sets a target mean and standard deviation. I'm testing 11 different alternatives, each with 6 variations, or essentially 66 different alternatives, looking for the one(s) that have lower means than the "do nothing" scenario.
Each alternative requires many replications (I'm doing 10 each), so you can see that the computer time really adds up (say 66 alternatives * 10 replications * ~4 hours/run ≈ 2640 computer hours!).
My question is: is there a technique where, after only a few runs of an alternative, I can statistically eliminate it because its mean will never be less than the "do nothing" mean (or because the probability of it being less is so small that it can safely be discarded)?
I know I can compare an alternative with the "do nothing" scenario using a t-test (since the samples are small, n < 30), but I don't believe that covers me statistically when I'd be basing it on only 3-4 samples.
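To make the kind of check I have in mind concrete, here's a rough sketch (Python/SciPy; the function name, the 3-run minimum, and the 0.05 cutoff are all placeholders I'm throwing out, not anything I've validated):

```python
from scipy import stats

def can_discard(alt_runs, baseline_runs, min_runs=3, alpha=0.05):
    """Return True if, after the replications run so far, the alternative's
    mean already looks significantly *higher* than the "do nothing" mean,
    so the remaining replications would likely be wasted computer time."""
    if len(alt_runs) < min_runs:
        return False
    # One-sided Welch t-test: H1 is "alternative mean > baseline mean".
    # Welch's form is used because the two scenarios may have different variances.
    t_stat, p_value = stats.ttest_ind(alt_runs, baseline_runs,
                                      equal_var=False,
                                      alternative='greater')
    return p_value < alpha
```

So after 3 or 4 replications of an alternative I'd call can_discard(alt_runs_so_far, baseline_runs) and skip the remaining runs if it returns True, which is exactly the step I'm not sure is statistically defensible with so few samples.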
I'm also aware of the non-parametric sign test and the Wilcoxon test, but because these compare a sample against a single fixed value, I don't know that they would be valid here (since the "do nothing" scenario has its own standard deviation as well).
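For what it's worth, the variant I've been eyeing is the two-sample rank-sum form of the Wilcoxon test (Mann-Whitney U), which compares the two samples directly rather than against a single fixed value. A minimal sketch of that check, with the same caveats as above:

```python
from scipy import stats

def rank_sum_discard(alt_runs, baseline_runs, alpha=0.05):
    """Non-parametric counterpart of the check above: flag the alternative
    for discarding if its runs tend to come out larger than the "do nothing"
    runs (one-sided Mann-Whitney U / Wilcoxon rank-sum test)."""
    u_stat, p_value = stats.mannwhitneyu(alt_runs, baseline_runs,
                                         alternative='greater')
    return p_value < alpha
```

The appeal over the one-sample sign test is that the baseline's own spread enters through its ranks rather than being collapsed into a single mean, but I still don't know whether either version holds up with only 3-4 samples per alternative.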
I'm also going to check with the college's stats profs, but this is semester break, and nobody's home...