T Roosevelt (1000+ posts) | Thu Aug-11-05 02:38 PM | Original message
Calling statistics DUers: I have a statistics question
Here's what I have:
I'm running some computer simulations to test various alternatives, and each one can take many hours. I'm looking for a way to reduce the number of runs I have to make.
I have a "do nothing" scenario with n=13 runs, which sets a target mean and standard deviation. I'm testing 11 different alternatives, each with 6 variations, or essentially 66 different alternatives, looking for the one(s) that have lower means than the "do nothing" scenario.
Each alternative requires many replications (I'm doing 10 per), so the computer time really adds up: at roughly 4 hours per run, that's 66 * 10 * 4 ~ 2640 computer hours!
My question is: is there a technique I can use where after a few runs on an alternative, I can statistically eliminate it because its mean will never be less than the "do nothing"? (or that the probability of it being less is so small that it can be discarded).
I know I can compare an alternative with the "do nothing" using a t-test (since the samples are smaller than 30), but I don't believe a test based on only 3-4 samples covers me statistically.
I am also aware of the non-parametric sign test and the Wilcoxon test, but because these compare a sample against a single fixed value, I don't know that they would be valid here (since the "do nothing" mean has a standard deviation as well).
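For the two-sample t-test mentioned above, Welch's unequal-variance version is the usual choice when the two groups have different sizes and spreads. A minimal pure-Python sketch (all run values below are made-up illustrations, not real simulation output):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for comparing the
    means of two samples with possibly unequal variances and sizes."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se = (va / na + vb / nb) ** 0.5                  # std. error of the difference
    t = (mean(sample_a) - mean(sample_b)) / se
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical run results, purely for illustration:
do_nothing = [412, 398, 405, 420, 391, 402, 415, 408, 396, 411, 404, 399, 417]  # n = 13
alternative = [385, 372, 390, 381]  # a few pilot runs of one alternative

t, df = welch_t(alternative, do_nothing)
```

A strongly negative t, compared against a one-sided critical value from a t table at about df degrees of freedom, favors the alternative being below the "do nothing" mean.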
I'm also going to check with the college's stats profs, but this is semester break, and nobody's home...

jody (1000+ posts) | Thu Aug-11-05 02:49 PM | Response to Original message
1. If you are using a computer program to generate random events, have you designed the experiment so that (a) you use a different seed for each experiment (i.e. each sample), (b) different types of events each have their own random-number stream, and (c) you have tested each random-number stream over the relevant segment you use and determined it is reasonably random?
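Points (a) and (b) can be illustrated with Python's `random` module: give each event type its own independently seeded generator, so changing how one part of the model draws numbers does not perturb the others. The seed values and rates here are arbitrary examples:

```python
import random

# One independently seeded stream per event type; seeds are arbitrary
# examples and would be varied from experiment to experiment.
arrival_stream = random.Random(12345)
service_stream = random.Random(67890)

# Each part of the model draws only from its own stream:
interarrival_time = arrival_stream.expovariate(1.0)
service_time = service_stream.expovariate(0.8)
```

Point (c), checking that each stream behaves acceptably over the segment actually used, can be done with standard empirical randomness checks such as a chi-square test on binned draws.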

T Roosevelt (1000+ posts) | Thu Aug-11-05 03:20 PM | Response to Reply #1
The randomness of the seeds is not the issue. The problem is that I have the "do nothing" scenario, which establishes a target mean (and deviation). I'm looking for an alternative that is statistically lower than the "do nothing", but some of the alternatives, after only a few runs, are significantly higher. I don't want to waste computer time running more replications for these alternative scenarios.
So what statistical technique can I use that says: OK, this alternative, given only this small sample of runs, is never going to have a mean lower than the "do nothing", no matter how many more replications I run?
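One pragmatic screening rule along these lines: after a few pilot runs, compute a Welch-style t statistic for the alternative against the "do nothing" runs, and discard the alternative only if it is already far above the baseline. A sketch with a deliberately conservative critical value, since the pilot sample gives very few degrees of freedom (all numbers are hypothetical):

```python
from statistics import mean, variance

def screen_out(alt_runs, baseline_runs, t_crit=3.0):
    """Return True if the alternative's mean is already so far ABOVE
    the baseline ("do nothing") mean that further replications are
    very unlikely to bring it below the baseline.
    t_crit = 3.0 is a deliberately conservative assumption, because
    a 3-4 run pilot sample has only a few degrees of freedom."""
    na, nb = len(alt_runs), len(baseline_runs)
    se = (variance(alt_runs) / na + variance(baseline_runs) / nb) ** 0.5
    t = (mean(alt_runs) - mean(baseline_runs)) / se
    return t > t_crit

# Hypothetical pilot data:
baseline = [100, 102, 98, 101]
bad_alternative = [150, 155, 148]    # clearly worse after 3 runs: discard
close_alternative = [101, 99, 103]   # too close to call: keep replicating
```

Formally this is a sequential test, and looking at the data repeatedly inflates the type I error rate; that is one reason for the conservative cutoff. Sequential designs such as Wald's sequential probability ratio test treat this problem rigorously.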

jody (1000+ posts) | Thu Aug-11-05 03:37 PM | Response to Reply #2
Edited on Thu Aug-11-05 03:40 PM by jody
3. I don't know the details of your experiment; however, if a particular run (i.e. sample) consists of 30 or more random replications, then you can use a simple t-test to determine whether the sample mean is equal to or greater than your threshold standard. You would of course use the sample standard deviation for that test. If the sample mean is not greater than your standard, then you can conclude it is statistically unlikely that further testing would produce results allowing you to reject the null, i.e. no effect.
You can write your simulation so that it tests for this condition after a specified minimum number of replications, using a predetermined critical t value to accept or reject the null hypothesis that the factor does not affect the results.
My statement is true in general; however, you might have some subtle condition with which I am not familiar that would cause me to give different advice.
:hi:
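The approach described in this reply might be sketched as follows. The threshold, minimum replication count, and critical value are illustrative assumptions, and note that this treats the "do nothing" mean as a fixed standard, ignoring its own sampling variability, which is the caveat raised upthread:

```python
from statistics import mean, stdev

def one_sample_t(runs, threshold):
    """One-sample t statistic against a fixed threshold standard."""
    n = len(runs)
    return (mean(runs) - threshold) / (stdev(runs) / n ** 0.5)

def worth_continuing(runs, threshold, min_reps=30, t_crit=1.699):
    """After at least min_reps replications, stop an alternative whose
    mean shows no hope of falling below the threshold.
    t_crit ~ 1.699 is the one-sided 5% critical value at about 29
    degrees of freedom; both defaults are illustrative assumptions."""
    if len(runs) < min_reps:
        return True               # not enough data to decide yet
    return one_sample_t(runs, threshold) < t_crit
```

The simulation driver would call `worth_continuing` after each replication and abandon the alternative as soon as it returns False, saving the remaining replications for more promising scenarios.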