Multidimensional Signals and Analytic Flexibility: Estimating Degrees of Freedom in Human-Speech Analyses

Stefano Coretta, Joseph V. Casillas, Simon Roessig, Michael Franke, Byron Ahn, Ali H. Al-Hoorie, Jalal Al-Tamimi, Najd E. Alotaibi, Mohammed K. AlShakhori, Gabriela M. Benavides, Nicole Benker, Emelia P. BensonMeyer, Nina R. Benway, Grant M. Berry, Liwen Bing, Christina Bjorndahl, Mariška Bolyanatz, Aaron Braver, Alejna Brugos, Erin M. Buchanan, Tanna Butlin, Andrés Buxó-Lugo, Francesco Cangemi, Christopher Carignan, Sita Carraturo, Eleanor Chodroff, Johanna Cronenberg, Olivier Crouzet, Charlotte Dawson, Carissa A. Diantoro, Marie Dokovova, Shiloh Drake, Fengting Du, Margaux Dubuis, Florent Duême, Matthew Durward, Ander Egurtzegi, Mahmoud M. Elsherif, Janina Esser, Emmanuel Ferragne, Lauren K. Fink, Sara Finley, Kurtis Foster, Paul Foulkes, Rosa Franzke, Gabriel Frazer-McKee, Robert Fromont, Jason Geller, Camille L. Grasso, Pia Greca, Martine Grice, Magdalena S. Grose-Hodge, Amelia J. Gully, Caitlin Halfacre, Ivy Hauser, Jen Hay, Robert Haywood, Sam Hellmuth, Allison I. Hilger, Nicole Holliday, Damar Hoogland, Vincent Hughes, Ane Icardo Isasa, Zlatomira G. Ilchovska, Mágat N. Junges, Stephanie Kaefer, Constantijn Kaland, Matthew C. Kelley, Thomas Kettig, Ghada Khattab, Ruud Koolen, Emiel Krahmer, Dorota Krajewska, Andreas Krug, Anna Lander, Tomas O. Lentz, Wanyin Li, Maria Lialiou, Julio Cesar Lopez Otero, Bradley Mackay, Mel Mallard, Carol-Ann Mary McConnellogue, George Moroz, Mridhula Murali, Ladislas Nalborczyk, Filip Nenadić, Jessica Nieder, Heather M. Offerman, Elisa Passoni, Maud Pélissier, Alexandra M. Pfiffner, Michael Proctor, Ryan Rhodes, Elizabeth Roepke, Jan P. Röer, Lucia Sbacco, Rebecca Scarborough, Felix Schaeffler, Erik Schleef, Dominic Schmitz, Alexander Shiryaev, Márton Sóskuthy, Malin Spaniol, Joseph A. Stanley, Alyssa Strickler, Alessandro Tavano, Fabian Tomaschek, Benjamin V. Tucker, Rory Turnbull, Kingsley O. Ugwuanyi, Iñigo Urrestarazu-Porta, Emiel van Miltenburg, Natasha Warner, Simon Wehrle, Hans Westerbeek, Seth Wiener, Jane Wottawa, Chenzi Xu, Germán Zárate-Sández, Georgia Zellou, Timo B. Roettger, Ruth M. Altmiller, Pablo Arantes, Angeliki Athanasopoulou, Melissa M. Baese-Berk, George Bailey, Cheman Baira A Sangma, Eleonora J. Beier, Violet A. Brown, Alice M. Brown, Coline Caillol, Tiphaine Caudrelier, Michelle Cohn, Erica L. Dagar, Fernanda Ferreira, Christina García, Yaqian Huang, Hae-Sung Jeon, Jacq Jones, Niamh E. Kelly, Abhilasha A. Kumar, Yanyu Li, Ronaldo M. Lima Jr., Justin J. H. Lo, Bethany MacLeod, Dušan Nikolić, Francisco G. S. Nogueira, Scott J. Perry, Nicole Rodríguez, Ruben Van De Vijver, Kirsten J. Van Engen, Bruce Xiao Wang, Stephen Winters, Sidney G.-J. Wong, Anna Wood, Cong Zhang, Jian Zhu

Research output: Contribution to journal › Article › Scientific › peer-review

4 Citations (Scopus)

Abstract

Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis that can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed their concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling but also from decisions regarding the quantification of the measured behavior. In this study, we gave the same speech-production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further found little to no evidence that the observed variability can be explained by analysts’ prior beliefs, expertise, or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system, and calibrate their (un)certainty in their conclusions.
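The Bayesian meta-analytic approach mentioned in the abstract can be illustrated with a minimal random-effects sketch: each team's reported effect size is treated as a noisy observation of a team-specific effect, which in turn is drawn from a population distribution whose spread captures between-team heterogeneity. The code below is a hypothetical illustration in Python/PyMC, not the authors' analysis code; the effect sizes, standard errors, and variable names are invented for demonstration.

```python
# Hypothetical sketch of a Bayesian random-effects meta-analysis across analysis
# teams (illustration only; not the original study's code or data).
import numpy as np
import pymc as pm
import arviz as az

# Invented example inputs: one standardized effect size and standard error per team.
effect = np.array([0.10, -0.05, 0.22, 0.01, 0.15, -0.12, 0.08, 0.30])
se = np.array([0.08, 0.10, 0.07, 0.12, 0.09, 0.11, 0.06, 0.10])
n_teams = len(effect)

with pm.Model() as meta_model:
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)        # population-level mean effect
    tau = pm.HalfNormal("tau", sigma=0.5)          # between-team heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=n_teams)  # team-specific effects
    pm.Normal("y", mu=theta, sigma=se, observed=effect)          # reported estimates

    idata = pm.sample(2000, tune=2000, target_accept=0.9, random_seed=1)

# The posterior for mu summarizes the overall effect; tau quantifies how much
# the teams' estimates diverge beyond what their standard errors alone explain.
print(az.summary(idata, var_names=["mu", "tau"]))
```

Team-level covariates (e.g., self-rated expertise, prior beliefs, or peer-rated analysis quality) could be added as predictors of the team-specific effects to mirror the paper's check of whether such factors explain the observed variability.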
Original language: English
Pages (from-to): 1-29
Number of pages: 29
Journal: Advances in Methods and Practices in Psychological Science
Volume: 6
Issue number: 3
DOIs
Publication status: Published - 1 Jul 2023

Keywords

  • Crowdsourcing science
  • Data analysis
  • Scientific transparency
  • Speech
  • Acoustic analysis
