Abstract
This dissertation focuses on understanding and detecting threats to the epistemology of science (Chapters 1-6) and on making practical advances to remedy such threats (Chapters 7-9).
Chapter 1 reviews the literature on responsible conduct of research, questionable research practices, and research misconduct.
Chapter 2 reanalyzes the claims of Head et al. (2015) about widespread p-hacking and tests their robustness.
Chapter 3 examines 258,050 test results across 30,710 articles from eight high-impact journals to investigate whether there is a peculiar excess of $p$-values just below .05 (i.e., a bump) in the psychological literature, and whether that bump has increased over time.
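One common way to probe for such a bump is a caliper test: count the $p$-values in a narrow window just below .05 and compare them against a matched window just above it. The sketch below is illustrative only; the window width, binomial framing, and function name are my own choices, not the chapter's exact procedure.

```python
from scipy.stats import binomtest

# Illustrative caliper test: absent a bump, a p-value this close to .05
# should be about equally likely to fall just below as just above it.
def caliper_test(p_values, threshold=0.05, width=0.005):
    below = sum(threshold - width <= p < threshold for p in p_values)
    above = sum(threshold < p <= threshold + width for p in p_values)
    # One-sided binomial test for an excess just below the threshold
    return binomtest(below, below + above, p=0.5, alternative="greater")

# Toy example: 70 results in [.045, .05) versus 40 in (.05, .055]
print(caliper_test([0.047] * 70 + [0.052] * 40).pvalue)
```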
Chapter 4 examines evidence for false negatives in nonsignificant results across psychology at large, in research on gender effects, and in the Reproducibility Project: Psychology.
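A standard way to turn a set of nonsignificant results into a test for false negatives is an adapted Fisher method: rescale each nonsignificant $p$-value to the unit interval and combine the rescaled values into a chi-square statistic. A minimal sketch of that idea follows; the rescaling shown and the function name are illustrative, not the chapter's exact implementation.

```python
import math
from scipy.stats import chi2

# Adapted Fisher method (sketch): test whether a set of nonsignificant
# p-values deviates from what uniform (null-only) p-values would give.
def fisher_nonsignificant(p_values, alpha=0.05):
    rescaled = [(p - alpha) / (1 - alpha) for p in p_values if p > alpha]
    statistic = -2 * sum(math.log(p) for p in rescaled)
    df = 2 * len(rescaled)  # chi-square df: two per combined p-value
    return statistic, chi2.sf(statistic, df)  # right-tailed p-value

print(fisher_nonsignificant([0.06, 0.20, 0.08, 0.35]))
```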
Chapter 5 describes a dataset that is the result of content mining 167,318 published articles for statistical test results reported according to the standards prescribed by the American Psychological Association (APA).
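Extracting APA-reported test results from full text amounts to pattern matching on strings such as t(28) = 2.20, p = .036. The toy regular expression below conveys the idea for $t$-tests only and is my own; production tools in this space (e.g., the statcheck R package, which also recomputes $p$-values from the reported statistics) handle many more test types and reporting variants.

```python
import re

# Toy extractor for APA-style t-test reports, e.g. "t(28) = 2.20, p = .036".
# Illustrative pattern only; real extraction covers F, chi-square, r, etc.
APA_T_TEST = re.compile(
    r"t\((?P<df>\d+(?:\.\d+)?)\)\s*=\s*(?P<value>-?\d*\.?\d+),\s*"
    r"p\s*(?P<comp>[<=>])\s*(?P<p>\d?\.\d+)"
)

text = "The effect was significant, t(28) = 2.20, p = .036."
for match in APA_T_TEST.finditer(text):
    print(match.groupdict())
```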
In Chapter 6, I test the validity of statistical methods to detect fabricated data in two studies.
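Statistical detection methods of this kind generally ask whether reported numbers behave as genuine measurements would. One commonly discussed family is digit analysis: terminal digits of genuinely measured values are often approximately uniform, so strong deviations can flag anomalies. The sketch below illustrates that general idea only; it is not necessarily one of the methods the chapter validates, and the function name is mine.

```python
from collections import Counter
from scipy.stats import chisquare

# Terminal-digit analysis (sketch): compare the distribution of final
# reported digits against the uniform distribution expected for many
# genuinely measured quantities.
def terminal_digit_test(reported_values):
    last_digits = [value.strip()[-1] for value in reported_values]
    counts = [Counter(last_digits)[str(d)] for d in range(10)]
    return chisquare(counts)  # null: all ten digits equally likely

values = ["12.3", "45.7", "8.1", "33.9", "20.4", "17.3", "9.9", "28.3"]
print(terminal_digit_test(values))
```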
Chapter 7 tackles the issue of data extraction from figures in scholarly publications.
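At its core, extracting data from a plotted figure means calibrating the axes and mapping pixel positions back to data coordinates. The sketch below shows that mapping for linear axes with two known reference ticks per axis; the names and values are illustrative, and the chapter's actual pipeline involves more than this single step.

```python
# Pixel-to-data calibration (sketch): given two axis ticks with known
# pixel positions and data values, linearly interpolate any other pixel.
# A log axis would require transforming the data values first.
def pixel_to_data(px, px0, px1, d0, d1):
    return d0 + (px - px0) * (d1 - d0) / (px1 - px0)

# Suppose the x-axis ticks for 0 and 10 sit at pixels 100 and 500:
print(pixel_to_data(300, px0=100, px1=500, d0=0.0, d1=10.0))  # -> 5.0
```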
In Chapter 8, I argue that "after-the-fact" research papers do not alleviate issues of access, selective publication, and reproducibility, but actually cause some of these threats, because the chronology of the research cycle is lost in a research paper. I therefore propose giving up the academic paper in favor of a digitally native "as-you-go" alternative.
Chapter 9 proposes a technical design for this alternative.
Original language | English
---|---
Qualification | Doctor of Philosophy
Award date | 17 Apr 2020
Place of Publication | s.l.
Publication status | Published - 2020