Monday March 28, 2011
From the Stat Dept:
John Ioannidis, a relatively recent arrival at Stanford and well known
for his widely cited paper “Why most published research findings are false,”
will give Tuesday’s Stat Seminar [tomorrow, March 29, 2011], the first of the spring series.
Empirical evaluations show that almost all biomedical studies currently highlight some statistically significant result with p-values <0.05. However, the large majority of these statistically significant claims fail to replicate when larger and better studies are conducted. Publication bias is often thought to be the explanation, but as I will discuss, it likely accounts for only a minority of the problem of excess significance.

Methods to detect publication and related biases typically focus on a single meta-analysis at a time. Most of these methods have serious problems at the conceptual and inferential level. The most popular ones, asymmetry tests (regression equivalents of funnel plots), are heavily misused in the literature. When properly used, they may be applicable to about 10% of current meta-analyses and evidence syntheses.

I propose a different approach that tests for excess significance in wider domains or whole fields of research. The idea is to model the difference between the observed and expected numbers of results that pass different thresholds of statistical significance, under different assumptions about the true effect sizes. One can use a simplified model in which the p=0.05 threshold is treated as a major attractor, or extend the concept to include more complex dynamics in the generation and publication of evidence, including the Proteus phenomenon and differential preferences for specific results based on what prior results in the field have been. Examples are drawn from diverse fields, including genetics, brain volume abnormality studies, and clinical trials.
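The observed-versus-expected idea in the abstract can be illustrated with a minimal sketch (my own simplification, not the speaker's code). Given a set of studies in a field, each study's power to detect an assumed true effect gives the expected count of significant results; an excess of observed significant results over that expectation hints at bias. Here power comes from a two-sided z-test approximation, and the tail probability uses a binomial with the average power as success probability, a simplification of the exact calculation; the 20-study example and its numbers are hypothetical.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sided(effect, se, z=1.959963984540054):
    """Power of a two-sided z-test (alpha=0.05) when the estimate ~ N(effect, se^2)."""
    return (1.0 - normal_cdf(z - effect / se)) + normal_cdf(-z - effect / se)

def excess_significance_test(observed_sig, standard_errors, assumed_effect):
    """Compare observed count of significant studies to the expected count.

    Returns (expected count, one-sided binomial p-value for
    'at least observed_sig significant results'), using the
    average power as the binomial success probability (a simplification).
    """
    powers = [power_two_sided(assumed_effect, se) for se in standard_errors]
    n = len(powers)
    expected = sum(powers)
    p_bar = expected / n
    p_value = sum(math.comb(n, k) * p_bar**k * (1.0 - p_bar)**(n - k)
                  for k in range(observed_sig, n + 1))
    return expected, p_value

# Hypothetical field: 20 studies of equal precision, 15 report significance,
# assumed true effect 0.2 with standard error 0.15 (low power per study).
ses = [0.15] * 20
expected, p = excess_significance_test(15, ses, assumed_effect=0.2)
```

With these assumed numbers only about 5 significant results are expected, so observing 15 yields a very small p-value, flagging excess significance in the hypothetical field.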