Don't go reading my emotions: affective harm, affective injustice and affective artificial intelligence
Goffin, Kris; Archer, Alfred
Abstract
Some AI applications are programmed to recognize people's emotions. These can be used to help or teach people to recognize and interpret the emotions of others, as well as to monitor the performance of customer service workers. Some commentators have identified several risks posed by these technologies, while others highlight the positive effects of affective AI. It can, for instance, serve as a form of affective and cognitive scaffolding that helps users to recognize and regulate their own emotions and those of other people. However, while affective AI can be a useful tool for achieving these purposes, we will argue that the use of these applications risks bringing about two forms of affective harm. The first is the risk of alienation from our own emotions. The second is the risk of emotional imperialism, which occurs when a dominant group imposes its emotional norms and practices on a marginalized group, while marking out the emotional norms of the marginalized as deviant and inferior.
Date
2025-10-26
Keywords
Affective artificial intelligence, Affective injustice, Affective scaffolding, Autism, Emotion recognition, Emotional alienation, Neurodiversity
Citation
Goffin, K. & Archer, A. 2025, 'Don't go reading my emotions: affective harm, affective injustice and affective artificial intelligence', Philosophical Psychology. https://doi.org/10.1080/09515089.2025.2571445
