

Registered reports for Consciousness and Cognition

2018, Consciousness and Cognition

Consciousness and Cognition is pleased to announce that, beginning 01 January 2018, it will offer authors the option of submissions that follow the Registered Report format (Chambers, Dienes, McIntosh, Rotshtein, & Willmes, 2015; Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014; Nosek & Lakens, 2014). Like many other journals, Consciousness and Cognition is concerned that some empirical studies in psychological science are based on methods, analyses, or interpretations that undermine good scientific reasoning (e.g., Francis, Tanzman, & Matthews, 2014; Simmons, Nelson, & Simonsohn, 2011). It is our belief that these problems often reflect misunderstandings about statistical analyses, theoretical predictions, or data interpretation. The Registered Report format encourages authors to think more carefully about study design, analysis, and interpretation, and thereby promotes better scientific practice.

A Registered Report broadly works in the following way. An initial submission to Consciousness and Cognition will include the Introduction and Methods sections of an empirical paper. Importantly, the empirical study should not yet have been performed. Following standard practice at Consciousness and Cognition, this document will be evaluated by one of the journal editors to determine whether it should be sent out for peer review. If it is sent out for review, the author will receive comments regarding the motivation and design of the study. As in standard peer review, there may be more than one round in which the authors, reviewers, and editor discuss the motivation and design of the study. If the reviewers and editor judge that the study is well motivated and properly designed, the manuscript will be provisionally accepted, pending completion of the empirical study. When the author finishes the study and writes up the full report, the manuscript will undergo a second round of peer review, this time to verify that the study was run as originally planned and to check the reasonableness of the conclusions and interpretations. Deviation from the methods and analyses described in the approved first submission is possible, but such changes must be clearly indicated in the second submission. Deviations that substantively change the methods, analyses, or conclusions may lead to rejection during the second round of peer review.

Because the initial submission contains no data, it is important for authors to justify the experiment's motivation and design. They should describe anticipated effect sizes, appropriate sample sizes, reasons for stimulus selection, all dependent and independent variables, and how the variables are measured; these descriptions should be supported by references that justify the relevant choices and calculations. They should also explain why the proposed analysis treats the data appropriately relative to the research question of interest, and describe and justify any criteria for data exclusion and transformations (e.g., to remove skew from the data). In short, the document should make a strong case that the planned experiment will adequately answer the proposed research question.
We hasten to point out that in an ideal world such experiment justification would be part of the majority of scientific reports; in this regard the Introduction and Methods sections of a Registered Report should differ little from those of regular submissions. In practice, we find that very few current manuscripts provide adequate justification for the reported experimental design. More often, the justification that an experiment was adequate rests on the finding that the data show the anticipated effect. Assuming authors are honest and fully report their findings, such data-contingent justification is partly a matter of luck, which is not a viable basis for the long-term progress of science. Of course, luck will always be a part of scientific work (if for no other reason than that experiments depend on a random sample), but the Registered Report format should motivate scientists to design experiments that remove the need for it.

Consciousness and Cognition does not require authors to use any specific style of statistical analysis. Frequentist, Bayesian, model-comparison, and machine-learning (e.g., cross-validation) approaches are all possibilities, provided authors can justify why their chosen analytic method is appropriate for the proposed research question. For each method, authors must justify how their planned sample size is sufficient to support the planned analyses. For a frequentist approach this typically means running a power analysis based on target effect sizes. Two points about power analyses deserve emphasis. First, it is not sufficient to plan the power analysis for an arbitrary (e.g., “medium”) effect size. If you do not have some reason for picking a specific effect size (or a range/distribution of effect sizes), then you probably should not be preparing a Registered Report submission. Valid reasons for a specific effect size include: the effect size was reported in previous studies, a smallest meaningful effect size can be identified, or you theorize that one effect is a variation of another (known) effect. We suspect other justifications can be generated depending on the details of an investigation. Second, a power analysis must include all tests that are relevant to your research question. If you plan to run a complex ANOVA and support for your hypothesis of interest requires a significant interaction, a significant contrast for some terms, and a non-significant contrast for other terms, then you need to identify sample sizes that give a high probability of success for all of these tests. It is worth noting that, depending on the effect sizes you hypothesize, it is sometimes impossible to get high power for the full set of tests (e.g., Francis, 2016). Typically, this kind of power analysis requires generating simulated data sampled from populations having the hypothesized effects and running the appropriate tests to see how often the desired outcomes appear (e.g., Lane & Hennes, in press), as illustrated in the sketches below.
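To make the single-test frequentist case concrete, the following is a minimal sketch of an a priori power analysis in Python using statsmodels. The specific numbers (a hypothesized Cohen's d of 0.4, alpha of .05, desired power of .90) are placeholders standing in for values that an author would justify from prior studies or theory; they are not recommendations.

# A minimal sketch of an a priori power analysis for an independent-samples
# t-test. The effect size, alpha, and power values below are placeholders
# to be replaced with justified values.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,          # hypothesized Cohen's d (justify this value)
    alpha=0.05,               # significance criterion
    power=0.90,               # desired probability of detecting the effect
    alternative='two-sided',
)
print(f"Required sample size per group: {n_per_group:.1f}")

Running the script prints the per-group sample size needed to reach the desired power for the hypothesized effect; varying the inputs shows how sensitive the required sample is to the justified effect size.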
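When support for a hypothesis requires a conjunction of outcomes, analytic power formulas are rarely available and simulation is the natural tool. The sketch below is a simplified illustration, not a prescribed method: instead of a full ANOVA it assumes a hypothetical design in which the hypothesis requires a significant effect in condition A (hypothesized d = 0.5) together with a non-significant effect in condition B (hypothesized d = 0). The helper function joint_power and all effect sizes and sample sizes are our own illustrative assumptions.

# Simulation-based power analysis for a hypothesis requiring BOTH a
# significant test in condition A AND a non-significant test in condition B.
# All effect sizes, alpha, and sample sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def joint_power(n_per_group, d_a=0.5, d_b=0.0, alpha=0.05, n_sims=10_000):
    """Proportion of simulated experiments in which the condition-A test is
    significant and the condition-B test is not."""
    successes = 0
    for _ in range(n_sims):
        # Condition A: two groups separated by the hypothesized effect d_a.
        p_a = stats.ttest_ind(rng.normal(0.0, 1.0, n_per_group),
                              rng.normal(d_a, 1.0, n_per_group)).pvalue
        # Condition B: two groups with the hypothesized null effect d_b.
        p_b = stats.ttest_ind(rng.normal(0.0, 1.0, n_per_group),
                              rng.normal(d_b, 1.0, n_per_group)).pvalue
        if p_a < alpha and p_b >= alpha:
            successes += 1
    return successes / n_sims

for n in (30, 60, 120):
    print(f"n per group = {n:3d}: joint power ~ {joint_power(n):.3f}")

The sketch also makes concrete the caveat above: if the condition-B effect is truly zero, the probability of a non-significant result there is at most 1 - alpha, so the joint power can never exceed roughly .95 no matter how large the sample, one simple way in which high power for a full set of tests can be unattainable (cf. Francis, 2016).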
Related to what was said above, an additional remark seems justified. A considerable share of the papers published in Consciousness and Cognition report experimental studies of typical perceptual/cognitive processes and phenomena. In the journals specializing in these topics, participant sample sizes are often relatively small (e.g., from about half a dozen up to a few dozen). However, recent research has indicated that common genetic variability interacts with the quantitative, and even qualitative, picture of perceptual/cognitive performance (e.g., Colzato, van den Wildenberg, & Hommel, 2014; Colzato et al., 2016; Maksimov, Vaht, Harro, & Bachmann, 2015; Maksimov, Vaht, Murd, Harro, & Bachmann, 2015, 2017; Zabelina, Colzato, Beeman, & Hommel, 2016). This means that with small participant samples experimental results can be difficult to replicate, not necessarily because of errors in the principal design or theory, but because some genetic variants are over- or underrepresented by chance, and the distribution of variants may be entirely different in a new study carried out in another lab. Therefore, provided that legal, ethical, and/or technical considerations do not pose serious obstacles, it is advisable – but definitely not compulsory – to carry out genotyping to control for common genetic variability and/or to use relatively large participant samples.

The only topical requirement for a Registered Report article is that it fall within the broad range of topics published by Consciousness and Cognition. This means, among other things, that planned replication studies can be submitted as Registered Reports. As for articles on other topics, a replication study must justify the need for the replication investigation.

Consciousness and Cognition does not require authors of Registered Report articles to place a copy of the original submission and accompanying material on a publicly accessible site. We feel that the two-stage process within the journal's peer review system serves a similar purpose. However, such postings are not prohibited, and we do encourage authors to share their data and analysis scripts on sites such as the Open Science Framework. Of course, in principle it would be difficult to know whether submitting authors have already gathered their data and are now simply playing a game called “Registered Reports”. However, we believe in the integrity of researchers, and we feel it is likely that authors who already have data at hand would use the traditional submission formats – indeed, we suspect many authors will greatly improve their experimental design based on reviewer feedback, which would render any already gathered data moot.

The Registered Report format is not a cure-all for scientific practice. It is possible for a Registered Report to contain poor science, and it is possible for excellent science to be done without a Registered Report. In an ideal world, every article would include proper justification for experimental design, data collection, measurement, and analysis. For a variety of reasons, such justification is not common in the present state of the field. We anticipate that the Registered Report format will encourage the development and description of such justifications so that they become the norm for scientific reporting.

References

Chambers, C. D., Dienes, Z., McIntosh, R. D., Rotshtein, P., & Willmes, K. (2015). Registered reports: Realigning incentives in scientific publishing. Cortex, 66, A1–A2.

Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. J. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17.
Colzato, L. S., Steenbergen, L., Sellaro, R., Stock, A. K., Arning, L., & Beste, C. (2016). Effects of l-Tyrosine on working memory and inhibitory control are determined by DRD2 genotypes: A randomized controlled trial. Cortex, 82, 217–224.

Colzato, L. S., van den Wildenberg, W. P., & Hommel, B. (2014). Cognitive control and the COMT Val158Met polymorphism: Genetic modulation of videogame training and transfer to task-switching efficiency. Psychological Research, 78(5), 670–678.

Francis, G. (2016). Implications of “too good to be true” for replication, theoretical claims, and experimental design: An example using prominent studies of racial bias. Frontiers in Psychology. http://dx.doi.org/10.3389/fpsyg.2016.01382.

Francis, G., Tanzman, J., & Matthews, W. J. (2014). Excess success for psychology articles in the journal Science. PLoS ONE, 9(12), e114255. http://dx.doi.org/10.1371/journal.pone.0114255.

Lane, S. P., & Hennes, E. P. (2017). Power struggles: Estimating sample size for multilevel relationships research. Journal of Social and Personal Relationships (in press).

Maksimov, M., Vaht, M., Harro, J., & Bachmann, T. (2015). Single 5HTR2A-1438 A/G nucleotide polymorphism affects performance in a metacontrast masking task: Implications for vulnerability testing and neuromodulation of pyramidal cells. Neuroscience Letters, 584, 129–134.

Maksimov, M., Vaht, M., Murd, C., Harro, J., & Bachmann, T. (2015). Brain dopaminergic system related genetic variability interacts with target/mask timing in metacontrast masking. Neuropsychologia, 71, 112–118.

Maksimov, M., Vaht, M., Murd, C., Harro, J., & Bachmann, T. (2017). Variants of TPH2 interact with fast visual processing as assessed by metacontrast. NeuroReport, 28(2), 111–114.

Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.

Zabelina, D. L., Colzato, L., Beeman, M., & Hommel, B. (2016). Dopamine and the creative mind: Individual differences in creativity are predicted by interactions between dopamine genes DAT and COMT. PLoS ONE, 11(1), e0146768.

Gregory Francis
Department of Psychological Sciences, Purdue University, United States
E-mail address: gfrancis@purdue.edu

Talis Bachmann
School of Law, Department of Penal Law and Institute of Psychology, University of Tartu, Estonia
E-mail address: talis.bachmann@ut.ee