Don't go reading my emotions: affective harm, affective injustice and affective artificial intelligence

    Research output: Contribution to journal › Article › Scientific › peer-review

    Abstract

    Some AI applications are programmed to recognize people's emotions. These applications can be used to help or teach people to recognize and interpret the emotions of others, as well as to monitor the performance of customer service workers. Some commentators have identified several risks associated with these technologies, while others highlight the positive effects of affective AI. It can, for instance, serve as a form of affective and cognitive scaffolding that helps users to recognize and regulate their own emotions and those of other people. However, while affective AI can be a useful tool for these purposes, we will argue that the use of these applications risks bringing about two forms of affective harm. The first is the risk of alienation from our own emotions. The second is the risk of emotional imperialism, which occurs when a dominant group imposes its emotional norms and practices on a marginalized group while marking out the emotional norms of the marginalized group as deviant and inferior.
    Original language: English
    Number of pages: 24
    Journal: Philosophical Psychology
    Early online date: Oct 2025
    DOIs
    Publication status: Published - 26 Oct 2025

    Keywords

    • Affective artificial intelligence
    • Affective injustice
    • Affective scaffolding
    • Autism
    • Emotion recognition
    • Emotional alienation
    • Neurodiversity
