
by Merete Monrad
12th September 2024

Do you have to express your emotions in a particular way to be heard as a legitimate participant? How do norms for feelings spread via digital media, and does artificial intelligence promote particular norms for expressing emotions? Approaching conversational AI as inherently political in that it implicitly articulates norms and values, my recent study examines the norms encoded in the technology.

Chatbots use machine learning based on vast amounts of data and, as such, reflect current norms, human biases and prejudice by picking up patterns in the data on which they are trained. Because the resulting discrimination is implicitly built into the model, it is hard to identify, making it important to examine and question the biases and social norms promoted by AI.

Conversational AI (re)produces specific social norms or ‘manners’ for proper conduct. When a chatbot is queried about appropriate and inappropriate anger, specific ‘feeling rules’ infuse the advice of a seemingly neutral system. Arlie Hochschild used the notion of ‘feeling rules’ to describe ‘the social rules according to which a feeling is or is not deemed appropriate to a situation’. While ‘feeling rules’ can derive from local communities and practices in situated social interaction, they are also affected by translocal and global norms. The increasing use of digital technologies in everyday life makes it relevant to examine which ‘feeling rules’ are encoded in these technologies, particularly as their high level of use and the possibilities they offer for intense personal engagement carry the potential for norms to be disseminated globally.

To examine the ‘feeling rules’ communicated to users, I asked ChatGPT a range of questions about what constitutes appropriate and inappropriate anger, as well as when and how anger should be expressed. Across almost all chatbot responses to anger, the arousal or passion of anger is cast as problematic: it can lead to illness, damage social relationships, result in violence and impede productivity and communication. ChatGPT draws relatively simple distinctions between appropriate (calm private talk) and inappropriate (aggressive, demeaning and violent) anger. However, once we start querying ChatGPT on how to manage anger, the responses become far more elaborate, revealing anger to be the object of detailed regulation. Here, ChatGPT ties anger closely to health. In its various responses on what constitutes a healthy way of managing anger, ChatGPT draws on individual cognitive and bodily strategies for diminishing the arousal of anger: take time to cool down, practise deep breathing, use humour and positive self-talk, engage in physical activity and meditation, listen to music, calmly assess the situation, talk to a friend or seek therapy. People are encouraged to identify the triggers of their emotions, control their expressions, keep a calm demeanour and communicate feelings clearly and effectively. Connecting anger to health in this way aligns with the dominant contemporary discourse of self-management, which reflects the neoliberal idea of the ideal citizen who takes personal responsibility and exercises self-control. At the same time, the connection to health depoliticises anger and disconnects it from public life and from issues of social justice and protest.

The responses of ChatGPT do not reflect repression of anger; rather, they call for specific ways of expressing anger, primarily through communicative strategies devoid of arousal. The discursive mix drawn on by ChatGPT seems to reflect a contemporary ‘problem’ with anger: one should be in touch with and express authentic emotions, but at the same time, one should be able to control and communicate anger without passion. Effective communication is at the core of the norms for anger articulated by ChatGPT. This involves expressing concerns in a calm, assertive, direct and constructive manner. However, the necessity of remaining calm is not only about communication; it is also about productivity: emotions should be managed in such a way as not to create ‘unproductive behaviour’ (ChatGPT).

According to the chatbot, we are well advised to control our anger and to express our grievances in a calm, productive, respectful and constructive manner. These ‘feeling rules’ are political in the sense that they may serve to uphold the status quo, and hence the interests of elites and advantaged populations, making it easy to cast aside the anger of the disadvantaged and dispossessed as too loud, aggressive or shrill to constitute legitimate participation. The struggle over appropriate and inappropriate emotions is also a struggle over power and status. The ‘feeling rules’ implied in the responses of the chatbot reflect a neoliberal conception of the self as individually responsible, productive, self-regulating, emotionally competent and able to find solutions. For instance, when responding to questions regarding anger at work, ChatGPT suggests a solution-oriented approach: being constructive, not venting to colleagues and keeping the stakes of one’s career in mind. While ChatGPT recommends involving HR or mental health professionals if the anger remains unresolved, references to workers’ rights, unions or collective protest are absent.

The emotion management promoted by ChatGPT seems to move anger in three directions: privatisation (rather than collectivisation), de-escalation (rather than mobilising energy to overcome obstacles) and solution orientation (obligating the angry individual to collaborate in finding constructive solutions). While these movements may dampen interpersonal conflict, they may also gloss over structural inequalities and the need for collective responses. These shifts seem to channel anger away from being a disruptive, mobilising force that challenges the status quo and towards collaborative practices that maintain it. The seemingly neutral advice of the chatbot potentially depoliticises anger, disciplines people to remain productive and respectful, and narrows the scope of anger expressions deemed acceptable.

Obviously, AI did not make up these ‘feeling rules’; they are based on the extensive data underlying chatbot responses and as such provide a condensed overview of the contemporary conventions we are likely to encounter within the online English-speaking public. However, it remains to be seen whether AI will contribute to spreading ‘feeling rules’ reflecting particular classed positions of advantage, thereby quietly delegitimising the anger of disadvantaged groups.

Merete Monrad, PhD, is associate professor at Aalborg University, Denmark. Her research focuses on emotions, temporality and user perspectives on welfare. She is currently studying anger in contemporary society, with a special focus on anger in the encounter between the welfare state and citizens.


Feeling rules in artificial intelligence: norms for anger management by Merete Monrad is available to read open access on Bristol University Press Digital here.


Image credit: Andrea Cassani via Unsplash