TY - JOUR
T1 - Confounds and overestimations in fake review detection
T2 - Experimentally controlling for product-ownership and data-origin
AU - Soldner, F.
AU - Kleinberg, B.
AU - Johnson, S.D.
PY - 2022/12/7
Y1 - 2022/12/7
N2 - The popularity of online shopping is steadily increasing. At the same time, fake product reviews are published widely and have the potential to affect consumer purchasing behavior. In response, previous work has developed automated methods that use natural language processing to detect fake product reviews. However, studies vary considerably in how well they detect deceptive reviews, and the reasons for such differences are unclear. A contributing factor may be the multitude of strategies used to collect data, which can introduce confounds that affect detection performance. Two possible confounds are data-origin (i.e., the dataset is composed of more than one source) and product ownership (i.e., reviews written by individuals who do or do not own the reviewed product). In the present study, we investigate the effect of both confounds on fake review detection. Using an experimental design, we manipulate data-origin, product ownership, review polarity, and veracity. Supervised learning analysis suggests that review veracity (60.26-69.87%) is somewhat detectable, but reviews additionally confounded with product-ownership (66.19-74.17%) or with data-origin (84.44-86.94%) are easier to classify. Review veracity is most easily classified if confounded with product-ownership and data-origin combined (87.78-88.12%). These findings are moderated by review polarity. Overall, our findings suggest that detection accuracy may have been overestimated in previous studies, provide possible explanations as to why, and indicate how future studies might be designed to provide less biased estimates of detection accuracy.
KW - Online
KW - Deception
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=wosstart_imp_pure20230417&SrcAuth=WosAPI&KeyUT=WOS:000925063300026&DestLinkType=FullRecord&DestApp=WOS
DO - 10.1371/journal.pone.0277869
M3 - Review article
C2 - 36477257
SN - 1932-6203
VL - 17
JO - PLOS ONE
JF - PLOS ONE
IS - 12
M1 - e0277869
ER -