Editorial
Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond
Citation: Christopher D. Chambers, Eva Feredoes, Suresh D. Muthukumaraswamy, Peter J. Etchells. Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond[J]. AIMS Neuroscience, 2014, 1(1): 4-17. doi: 10.3934/Neuroscience.2014.1.4
© 2014 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Figure 1. The hypothetico-deductive model of the scientific method is compromised by a range of questionable research practices (QRPs; red). Lack of replication impedes the elimination of false discoveries and weakens the evidence base underpinning theory. Low statistical power increases the chances of missing true discoveries and reduces the likelihood that obtained positive effects are real. Exploiting researcher degrees of freedom (p-hacking) manifests in two general forms: collecting data until analyses return statistically significant effects, and selectively reporting analyses that reveal desirable outcomes. HARKing, or hypothesizing after results are known, involves generating a hypothesis from the data and then presenting it as a priori. Publication bias occurs when journals reject manuscripts on the basis that they report negative or undesirable findings. Finally, lack of data sharing prevents detailed meta-analysis and hinders the detection of data fabrication.
Figure 2. The submission pipeline and review criteria for Registered Reports at AIMS Neuroscience. Further details can be found at http://www.aimspress.com/reviewers.pdf