Abstract
We provide evidence on how workers on an online platform perceive algorithmic versus human recruitment through two incentivized experiments designed to elicit willingness to pay for human or algorithmic evaluation. In particular, we test how information on workers’ performance affects their recruiter choice and whether the algorithmic recruiter is perceived as more or less gender-biased than the human one. We find that workers do perceive human and algorithmic evaluation differently, even though both recruiters are given the same inputs in our controlled setting. Specifically, human recruiters are perceived to be more error-prone evaluators and place more weight on personal characteristics, whereas algorithmic recruiters are seen as placing more weight on task performance. Consistent with these perceptions, workers with good task performance relative to others prefer algorithmic evaluation, whereas those with lower task performance prefer human evaluation. We also find suggestive evidence that perceived differences in gender bias drive preferences for human versus algorithmic recruitment.
| Original language | English |
| --- | --- |
| Article number | 104420 |
| Journal | Research Policy |
| Volume | 51 |
| Issue number | 2 |
| Publication status | Published - Mar 2022 |
Keywords
- Algorithmic evaluation
- Online experiment
- Online labor market
- Technological change