Findings from 162 researchers in 73 teams testing the same hypothesis with the same data reveal a universe of unique analytical possibilities leading to a broad range of results and conclusions. Surprisingly, most of the variance in outcomes cannot be explained by variation in researchers’ modeling decisions or prior beliefs. Each of the 1,261 test models submitted by the teams was ultimately a unique combination of data-analytical steps. Because the noise generated in this crowdsourced research remains largely unexplained even after applying a battery of meta-analytic methods, we conclude that idiosyncratic researcher variability is a threat to the reliability of scientific findings. These results highlight the complexity and ambiguity inherent in the data-analysis process, which future efforts to assess and improve the credibility of scientific work must take into account.
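The core claim — that coded modeling decisions explain little of the between-model variance — can be illustrated with a minimal variance-decomposition sketch. The data below are simulated for illustration only (the specification dummies, effect sizes, and noise level are all hypothetical, not the study's actual data): model estimates are regressed on binary specification-choice indicators, and the R² gives the share of variance those choices account for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1,261 model estimates, each tagged with a few
# binary specification choices (e.g., estimator, controls, sample).
n_models, n_choices = 1261, 6
X = rng.integers(0, 2, size=(n_models, n_choices)).astype(float)

# Assume specification choices shift estimates only slightly, while
# idiosyncratic (unmodeled) researcher variability dominates.
true_betas = rng.normal(0.0, 0.05, size=n_choices)
estimates = X @ true_betas + rng.normal(0.0, 1.0, size=n_models)

# Decompose variance: regress estimates on the specification dummies
# and compute R^2, the share of variance those coded choices explain.
X1 = np.column_stack([np.ones(n_models), X])
coef, *_ = np.linalg.lstsq(X1, estimates, rcond=None)
resid = estimates - X1 @ coef
r2 = 1.0 - resid.var() / estimates.var()
print(f"Share of variance explained by coded choices: {r2:.3f}")
```

Under these assumptions the R² comes out near zero: most of the spread in estimates is left unexplained by the coded analytical decisions, mirroring the pattern described above.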