Calling statistics DUers: I have a statistics question

T Roosevelt (1000+ posts) Thu Aug-11-05 02:38 PM
Original message
Calling statistics DUers: I have a statistics question
Here's what I have:

I'm running some computer simulations to test various alternatives, and each one can take many hours. I'm looking for a way to reduce the number of runs I have to make.

I have a "do nothing" scenario with n=13 runs, which sets a target mean and standard deviation. I'm testing 11 different alternatives, each with 6 variations, or essentially 66 different alternatives, looking for the one(s) that have lower means than the "do nothing" scenario.

Each alternative requires many replications (I'm doing 10 per alternative), so you can see that the computer time really adds up (66 * 10 * 4 hours ≈ 2,640 computer hours!).

My question is: is there a technique I can use where, after a few runs of an alternative, I can statistically eliminate it because its mean will never be less than the "do nothing" mean (or because the probability of it being less is so small that it can be discarded)?

I know I can compare the alternative with the "do nothing" using a t-test (because the samples are <30), but I don't believe this covers me statistically (since I'd be basing it on 3-4 samples).
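
For concreteness, here's roughly the comparison I have in mind, in Python with SciPy (the run values are made up, and I'm using Welch's unequal-variance form since the two scenarios may have different deviations):

import numpy as np
from scipy import stats

# Made-up run values, for illustration only.
do_nothing = np.array([102.3, 98.7, 101.1, 99.5, 103.0, 100.2, 97.8,
                       101.9, 99.0, 100.8, 102.5, 98.2, 100.1])  # n = 13 baseline runs
candidate = np.array([110.4, 108.9, 112.1, 109.7])               # 4 runs of one alternative

# One-sided Welch t-test: is the candidate mean greater than the baseline mean?
t_stat, p_value = stats.ttest_ind(candidate, do_nothing,
                                  equal_var=False, alternative='greater')
print(t_stat, p_value)  # a tiny p-value says this alternative already looks worse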

I am also aware of the non-parametric sign test and the Wilcoxon test, though because these compare a sample to a single fixed mean, I don't know that they would be valid (since the "do nothing" has a deviation as well).
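
(I gather the Wilcoxon test also has a two-sample rank-sum form, the Mann-Whitney U test, which compares two samples directly rather than one sample against a fixed value, but I'm not sure it applies here either. Reusing the made-up arrays from above:)

from scipy import stats

# Two-sample nonparametric comparison (Wilcoxon rank-sum / Mann-Whitney U).
u_stat, p_value = stats.mannwhitneyu(candidate, do_nothing, alternative='greater')
print(u_stat, p_value)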

I'm also going to check with the college's stats profs, but this is semester break, and nobody's home...
jody (1000+ posts) Thu Aug-11-05 02:49 PM
Response to Original message
1. If you are using a computer program to generate random events, have you designed the experiment so that (a) you use a different seed for each experiment (i.e., sample), (b) different types of events each have their own random number stream, and (c) you have tested the random number streams over the relevant segment you use for your experiment and determined they are reasonably random?
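
A sketch of what I mean, using NumPy's SeedSequence to give each experiment, and each event type within it, its own independent stream (the stream names are just for illustration):

import numpy as np

root = np.random.SeedSequence(20050811)   # master seed, chosen once and recorded
children = root.spawn(66)                 # one independent child seed per alternative

# Within one experiment, give each type of event its own stream.
streams = children[0].spawn(2)
rng_arrivals = np.random.default_rng(streams[0])
rng_service = np.random.default_rng(streams[1])

print(rng_arrivals.random(3), rng_service.random(3))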
 
T Roosevelt (1000+ posts) Thu Aug-11-05 03:20 PM
Response to Reply #1
2. Yes
The randomness of the seeds is not the issue. The problem is that I have the "do nothing" scenario, which establishes a target mean (and deviation). I'm looking for an alternative that is statistically lower than the "do nothing", but some of the alternatives, after only a few runs, are significantly higher. I don't want to waste computer time running more replications for these alternative scenarios.

So what statistical technique can I use that says, OK, this alternative (with only this small sample of runs), regardless of the number of replications, is never going to have a mean that is lower than the "do nothing"?
 
jody (1000+ posts) Thu Aug-11-05 03:37 PM
Response to Reply #2
Edited on Thu Aug-11-05 03:40 PM by jody
3. I don't know the details of your experiment; however, if a particular run (e.g., sample) consists of 30 or more random replications, then you can use a simple t-test to determine whether the sample mean is equal to or greater than your threshold standard. You would of course use the sample standard deviation for that test. If the sample mean is not greater than your standard, then you can conclude that it's statistically unlikely that further testing would produce results that would allow you to reject the null (i.e., no effect).

You can write your simulation so that it tests for your condition after a specified minimum number of replications and uses a predetermined critical t value to accept or reject the null, where the null hypothesis is that the factor does not affect the results.
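
A rough sketch of that loop in Python (the cutoffs and the stand-in run_once function are placeholders, not recommendations; note too that testing repeatedly after each replication inflates the error rate, so the critical value should be chosen conservatively):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_once(rng):
    """Stand-in for one multi-hour simulation run of a candidate alternative."""
    return rng.normal(loc=105.0, scale=2.0)           # made-up output

baseline = rng.normal(loc=100.0, scale=2.0, size=13)  # stand-in "do nothing" sample

MIN_REPS, MAX_REPS, ALPHA = 5, 10, 0.01               # decided before the study

runs = []
for rep in range(MAX_REPS):
    runs.append(run_once(rng))
    if len(runs) < MIN_REPS:
        continue
    # One-sided Welch t-test: is this alternative's mean greater than baseline's?
    t, p = stats.ttest_ind(runs, baseline, equal_var=False, alternative='greater')
    if p < ALPHA:
        print(f"eliminated after {len(runs)} replications (p = {p:.4f})")
        break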

My statement is true in general; however, you might have some subtle condition with which I am not familiar that would cause me to give different advice.

:hi:
 