Commit f41a204
Plaza committed on Mar 28, 2024
1 parent 668a668
Showing 5 changed files with 84 additions and 8 deletions.
@@ -0,0 +1,7 @@
@article{plaza2024emotionsurvey,
  title={Emotion Analysis in NLP: Trends, Gaps and Roadmap for Future Directions},
  author={Flor Miriam Plaza-del-Arco and Alba Curry and Amanda Cercas Curry and Dirk Hovy},
  journal={arXiv preprint arXiv:2403.01222},
  year={2024},
  abstract = "Emotions are a central aspect of communication. Consequently, emotion analysis (EA) is a rapidly growing field in natural language processing (NLP). However, there is no consensus on scope, direction, or methods. In this paper, we conduct a thorough review of 154 relevant NLP publications from the last decade. Based on this review, we address four different questions: (1) How are EA tasks defined in NLP? (2) What are the most prominent emotion frameworks and which emotions are modeled? (3) Is the subjectivity of emotions considered in terms of demographics and cultural factors? and (4) What are the primary NLP applications for EA? We take stock of trends in EA and tasks, emotion frameworks used, existing datasets, methods, and applications. We then discuss four lacunae: (1) the absence of demographic and cultural aspects does not account for the variation in how emotions are perceived, but instead assumes they are universally experienced in the same manner; (2) the poor fit of emotion categories from the two main emotion theories to the task; (3) the lack of standardized EA terminology hinders gap identification, comparison, and future goals; and (4) the absence of interdisciplinary research isolates EA from insights in other fields. Our work will enable more focused research into EA and a more holistic approach to modeling emotions in NLP."
}
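Side note on the entry format: BibTeX expects co-authors in the author field to be joined with " and " rather than commas, as in the entry above. A minimal stdlib-only sketch of reading that field back out (the cite.bib path is a hypothetical placeholder, since the publication folder name is not part of this commit excerpt):

```python
import re
from pathlib import Path

# Hypothetical path: Hugo Academic publication folders keep a cite.bib next to
# index.md, but the exact folder name is not shown in this commit.
bib_text = Path("content/publication/emotion-survey/cite.bib").read_text(encoding="utf-8")

# Pull out the author field and split on BibTeX's " and " separator.
match = re.search(r'author\s*=\s*[{"](.+?)[}"]', bib_text)
authors = [name.strip() for name in match.group(1).split(" and ")] if match else []
print(authors)
# Expected: ['Flor Miriam Plaza-del-Arco', 'Alba Curry', 'Amanda Cercas Curry', 'Dirk Hovy']
```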
(One of the changed files could not be displayed.)
@@ -0,0 +1,69 @@
---
# Documentation: https://sourcethemes.com/academic/docs/managing-content/

title: "Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution"
authors: ["Flor Miriam Plaza-del-Arco","Amanda Cercas Curry", "Alba Curry", "Gavin Abercrombie", "Dirk Hovy"]
date: 2024-03-28
doi: ""

# Schedule page publish date (NOT publication's date).
publishDate: 2024-03-28T17:15:00+01:00

# Publication type.
# Legend: 0 = Uncategorized; 1 = Conference paper; 2 = Journal article;
# 3 = Preprint / Working Paper; 4 = Report; 5 = Book; 6 = Book section;
# 7 = Thesis; 8 = Patent
publication_types: ["3"]

# Publication name and optional abbreviated publication name.
publication: "arXiv"
publication_short: "arXiv"

abstract: "Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like 'When I had a serious argument with a dear person'. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications."

# Summary. An optional shortened abstract.
summary: ""

tags: ["Emotion attribution","Gender Bias","Large Language Models"]
categories: []
featured: false

# Custom links (optional).
# Uncomment and edit lines below to show custom links.
# links:
# - name: Follow
#   url: https://twitter.com
#   icon_pack: fab
#   icon: twitter

url_pdf: https://arxiv.org/pdf/2403.03121.pdf
url_code:
url_dataset:
url_poster:
url_project:
url_slides:
url_source:
url_video:

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: 'Stereotypical model biases in gendered emotion attribution'
  focal_point: "Center"
  preview_only: false

# Associated Projects (optional).
# Associate this publication with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `internal-project` references `content/project/internal-project/index.md`.
# Otherwise, set `projects: []`.
projects: [integrator]

# Slides (optional).
# Associate this publication with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides: "example"` references `content/slides/example/index.md`.
# Otherwise, set `slides: ""`.
slides: ""
---
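One way to catch typos in publication pages like this one before building the site is to parse the front matter and confirm the fields the publication template relies on are present. A rough sketch of such a check, assuming PyYAML is installed and using a hypothetical path for this page:

```python
import yaml  # PyYAML, assumed to be available in the environment
from pathlib import Path

# Hypothetical path; the actual publication folder name is not shown in this commit.
page = Path("content/publication/emotion-stereotypes/index.md").read_text(encoding="utf-8")

# Hugo front matter sits between the opening and closing '---' fences.
_, front_matter, _body = page.split("---", 2)
meta = yaml.safe_load(front_matter)

# A few fields the publication layout appears to rely on.
for key in ("title", "authors", "date", "publication_types", "url_pdf"):
    assert meta.get(key), f"missing or empty front matter field: {key}"

print("ok:", meta["title"])
```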
@@ -1,7 +1,7 @@
-@article{rooein2023know,
+@article{plaza2024emotionstereotypes,
 title={Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution},
 author={Flor Miriam Plaza-del-Arco and Amanda Cercas Curry and Alba Curry and Gavin Abercrombie and Dirk Hovy},
 journal={arXiv preprint arXiv:2403.03121},
 year={2024},
-abstract = "Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like 'When I had a serious argument with a dear person'. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications."
+abstract = "Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We investigate whether emotions are gendered, and whether these variations are based on societal stereotypes. We prompt the models to adopt a gendered persona and attribute emotions to an event like 'When I had a serious argument with a dear person'. We then analyze the emotions generated by the models in relation to the gender-event pairs. We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes. These findings are in line with established research in psychology and gender studies. Our study sheds light on the complex societal interplay between language, gender, and emotion. The reproduction of emotion stereotypes in LLMs allows us to use those models to study the topic in detail, but raises questions about the predictive use of those same LLMs for emotion applications."
 }
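The rename above replaces a cite key left over from another publication (rooein2023know), the kind of copy-paste slip that is easy to miss. A small stdlib-only sweep can list every cite key under the publication folders so stale or duplicate keys stand out; this assumes the usual Hugo Academic layout of content/publication/&lt;slug&gt;/cite.bib:

```python
import re
from pathlib import Path

# Assumes the standard Hugo Academic layout: content/publication/<slug>/cite.bib
for bib in sorted(Path("content/publication").glob("*/cite.bib")):
    text = bib.read_text(encoding="utf-8")
    # Capture the cite key of each entry, e.g. "plaza2024emotionstereotypes".
    keys = re.findall(r"@\w+\{([^,\s]+)\s*,", text)
    print(f"{bib.parent.name}: {', '.join(keys) or '(no entries found)'}")
```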