Comments on "A mathematician at risk: Your n is probably a lot smaller than you think"

Comment by Anonymous (https://www.blogger.com/profile/16796611979115715515), 2011-06-19 18:35:

Thanks Hadil. Sorry I've taken so long to respond.

I was going to see if I could answer with a gamut of statistical recipes for you to run through that would make you safe, but then I realized that I would just be repeating the same mistake I was complaining about. Besides, there are people who did take a lot of stats classes, which I didn't (I just like spouting off), who could do a far better job.

Instead I would just say: keep in mind what a p-value is really saying. It says that if the assumptions of your model are correct, then the chance of seeing an effect of this size, when there is no effect in the general population, is very small. So if one of the assumptions of the model is that your observations are independent, then you should test that assumption, particularly if you are uncomfortable with the confidence the p-value seems to be giving you. How you do this may involve more creativity than rigor.

Personally, I'm always a big fan of segmentation. Do you have extra variables on your observations that you aren't currently using, like the day of the week, the time of day, or the height of the subject? If so, break your observations into groups by these variables and see if you can't make your effect disappear for all but a couple of groups. This is a data mining technique, though, so you'll want to see pretty sizable differences in effect between groups to be sure you've found something.
When you look at the same observations across many dimensions, eventually you'll find some outliers just by chance.

Alternatively, if your observations can be ordered in some way, like by time or location, you can check whether your dependent variable is autoregressive at all in your control group or in your experimental group. R's arima function fits exactly this kind of model (you'll want to play with different parameters, but you probably want to see something from a model of the form (n,0,0) or (0,0,n), where n is small compared to your number of observations). Or you can just do a linear regression of the series against itself with the first observation removed.

And of course the gold standard: repeat the whole experiment and see if the effect is about the same size as before. Even if the effect grows, that can be a warning sign that your p-value is misleading you.

I hope that's of some help. In the stuff I tend to look at, most effects aren't small. Small effects are usually either big effects happening to a small group within a larger group, or two big effects on two big groups that are mostly cancelling each other out. But then that's probably just observational bias on my part; I'm probably forgetting about the small effects because they aren't as fun.

Please share if you end up finding any great tools that helped you with your concerns.
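The last suggestion above, regressing the series against itself with the first observation removed, amounts to a lag-1 autocorrelation check. Here is a minimal sketch in plain Python (the comment mentions R, but this is equivalent); the `lag1_autocorrelation` helper and the example data are made up for illustration:

```python
import random

def lag1_autocorrelation(series):
    """OLS of the series (first observation dropped) on the series
    (last observation dropped); returns (slope, correlation)."""
    x = series[:-1]   # series without its last observation
    y = series[1:]    # series without its first observation
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    slope = cov / var_x                 # OLS slope of y on x
    r = cov / (var_x * var_y) ** 0.5    # lag-1 correlation coefficient
    return slope, r

# Made-up data: an AR(1)-style series (observations NOT independent)
# versus plain independent noise.
random.seed(0)
ar = [0.0]
for _ in range(499):
    ar.append(0.8 * ar[-1] + random.gauss(0, 1))
noise = [random.gauss(0, 1) for _ in range(500)]

print(lag1_autocorrelation(ar)[1])     # large: independence assumption violated
print(lag1_autocorrelation(noise)[1])  # near zero: no sign of autocorrelation
```

A clearly nonzero lag-1 correlation in either group is a red flag that the independence assumption behind the p-value does not hold.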
Comment by Hadil (https://www.blogger.com/profile/06522838451758386908), 2011-05-21 10:06:

Hey Steven,

I feel like you were talking to me. That noise is exactly what I am afraid of in my study. How can I reduce the noise? I got highly significant p-values (<0.001), but when I look at the actual numbers, I know for a fact that the difference between groups is not clinically significant. I don't know how to explain that.