“It's not that I don't trust the bot, but can it actually help?” Listening to Young Adults’ Experience With the Transparency Designs of an Image-based Sexual Abuse Victim Support AI Chatbot

  • Yuying Tan
  • Karolien Poels
  • Heidi Vandebosch
  • Sara Pabian
  • Nicola Henry
  • Alice Witt
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › Peer-reviewed

Abstract

Image-based sexual abuse (IBSA) is an umbrella term covering the nonconsensual taking, creating, or sharing of nude or sexual images or videos. Organizations supporting IBSA victim-survivors have begun integrating AI chatbots into their services. The AI Act emphasizes transparency, requiring AI systems to disclose the use of AI to users. However, experimental research on transparency in chatbot design—such as disclosing bot identity, functionality, and limitations—has yielded mixed findings, and existing studies have not explored users’ in-depth experiences. To address this gap, we conducted semi-structured interviews with young adults (N=26, ages 18–29) to answer two research questions: RQ1: What are young adults’ positive and negative user experiences with a chatbot designed to support victim-survivors of IBSA? RQ2: What are young adults’ positive and negative user experiences with the disclosures of using AI in a chatbot designed to support victim-survivors of IBSA? We found that young adults liked the chatbot because it was easy to use, not embarrassing to talk to, and provided both informational and emotional support. However, they disliked that the information was often too long, sometimes irrelevant or not personalized, and that the emotional support fell short of their expectations compared to a human. Regarding the transparency design, they liked the non-transparent version because it enabled quicker access to support and fostered a greater sense of intimacy. Conversely, they appreciated the transparent version because disclosures such as “made by experts” and “protects your privacy” enhanced their trust and sense of security. However, the limitation disclosure (e.g., “I cannot understand like a real human”) drew some negative comments, such as making users feel distant from the chatbot, less confident in its ability, and less inclined to engage further.
These findings suggest that while transparency can build trust, certain disclosures, such as blunt statements of a chatbot's limitations, may undermine user confidence and engagement. Further research is needed to refine communication strategies that balance trust and usability in AI-driven support systems.
Original language: English
Title of host publication: GoodIT '25: Proceedings of the 2025 International Conference on Information Technology for Social Good
Publisher: ACM
Pages: 60-69
Number of pages: 10
Volume: 5
ISBN (Print): 9798400720895
DOIs
Publication status: Published - 9 Dec 2025
Event: GoodIT '25: International Conference on Information Technology for Social Good - Antwerp, Belgium
Duration: 3 Sept 2025 – 5 Jan 2026
Conference number: 5
https://goodit2025.idlab.uantwerpen.be/

Conference

Conference: GoodIT '25
Abbreviated title: GoodIT '25
Country/Territory: Belgium
City: Antwerp
Period: 3/09/25 – 5/01/26
Internet address: https://goodit2025.idlab.uantwerpen.be/
