Empathy Helps Counter Hate Speech

Online hate speech has become a pressing issue worldwide. On social networks, comments and posts vilify or harass people because of their sexual orientation, religion, or ethnicity. Beyond the harm to individuals, hate speech is a threat to democracy, as it can discourage those who are targeted from participating in public debate.

To moderate hateful comments, many social media platforms have developed sophisticated filters. However, these are only a partial solution. According to internal Facebook documents leaked in October 2021, the company estimates that it can delete only 5 percent of the hateful comments posted. Furthermore, automatic filters are imprecise and could undermine freedom of speech.

Inducing empathy for those affected

Rather than delete problematic comments, which could prompt accusations of censorship, organizations can counter them with alternative messages or narratives. This practice, known as “counterspeech,” has become more common, but little is known about which counterspeech strategies are most effective in addressing online hostility. A team of researchers led by Dominik Hangartner, IPL co-director and professor of public policy at ETH Zurich, has joined forces with colleagues at the University of Zurich to investigate what kind of messages could lead authors of hate speech to refrain from such postings in the future.

Using machine learning methods, the researchers identified 1,350 English-speaking Twitter users who had published racist or xenophobic content. They randomly assigned these accounts to a control group or one of the following three counterspeech strategies: messages that elicit empathy with the group targeted by racism; humor; or a warning of possible consequences.
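To illustrate the experimental design, the sketch below shows one simple way such a set of flagged accounts could be randomly assigned to a control group and three treatment arms. The account IDs, arm labels, and seed are hypothetical; this is not the study's actual assignment code.

```python
# Illustrative sketch only: randomly assigning ~1,350 flagged accounts to a
# control group or one of three counterspeech arms. IDs and labels are invented.
import random

flagged_accounts = [f"user_{i}" for i in range(1350)]  # placeholder account IDs
arms = ["control", "empathy", "humor", "consequences"]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = flagged_accounts[:]
rng.shuffle(shuffled)

# Deal accounts round-robin into the four arms so group sizes stay balanced
assignment = {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

for arm, accounts in assignment.items():
    print(arm, len(accounts))
```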

The results were clear: Only counterspeech messages that elicit empathy with the people affected by hate speech are likely to persuade the senders to change their behavior. An example of such a response could be: “Your post is very painful for Jewish people to read…” Compared to the control group, the authors of hateful tweets posted around one-third fewer racist or xenophobic comments after such an empathy-inducing intervention. Additionally, the probability that a hate tweet was deleted by its author increased significantly. In contrast, the authors of hate tweets barely reacted to humorous counterspeech. Reminding senders that their family, friends and colleagues could see their hateful comments was not effective, either. This is striking because these two strategies are frequently used by organizations that are committed to combatting hate speech.
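The headline result is a comparison of post-intervention behavior against the control group. As a minimal sketch of how such an effect can be expressed as a difference in means, the snippet below uses invented tweet counts (not the study's data) that happen to produce a roughly one-third reduction:

```python
# Illustrative sketch only: estimating the empathy arm's effect as a simple
# difference in post-treatment tweet counts. All numbers are invented.
from statistics import mean

# Hypothetical counts of racist/xenophobic tweets per account after the intervention
control_counts = [6, 4, 5, 7, 3, 5]
empathy_counts = [4, 3, 3, 5, 2, 3]

effect = mean(empathy_counts) - mean(control_counts)
relative_change = effect / mean(control_counts)

print(f"Absolute difference: {effect:.2f} tweets per account")
print(f"Relative change vs. control: {relative_change:.0%}")
```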

“We have certainly not found a panacea against hate speech on the internet, but we have uncovered important clues about which strategies might work, and which do not,” says Hangartner. What remains to be studied is whether all empathy-based responses work similarly well, or whether particular messages are more effective. For example, hate speech authors could be encouraged to put themselves in the victim’s shoes or be asked to adopt an analogous perspective (“How would you feel if people talked about you like that?”).

Blending teaching and research

Alongside Professors Karsten Donnay and Fabrizio Gilardi from the University of Zurich’s Digital Democracy Lab, 13 graduate students from the ETH Center for Comparative and International Studies (CIS) were also deeply involved in the project. The students participated in all phases of the project, from developing an algorithm to detect hate tweets, to testing the strategies on Twitter, to conducting the statistical analysis and managing the project.
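The article does not describe the team's detection algorithm in detail. For readers unfamiliar with the approach, the following is a minimal, hypothetical sketch of the general technique of a supervised text classifier for flagging hateful tweets, using a tiny invented training sample; it is not the study's actual model.

```python
# Illustrative sketch only: a minimal supervised classifier of the kind that could
# flag hateful tweets. Training examples are invented; the study's actual
# detection algorithm is not described in this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical labeled sample (1 = hateful, 0 = not hateful)
texts = [
    "those people should go back where they came from",
    "had a great time at the concert tonight",
    "they are ruining this country and don't belong here",
    "looking forward to the weekend with friends",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["people like that don't belong in our city"]))
```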

“To me, this new type of collaborative seminar exemplifies a form of education that equips students with important tools, not only for data science and social science but also for research ethics. My hope is that this hands-on education enables them to make a positive impact in the field of digitalization and social media,” says Hangartner.

The students involved take a similar view. “We haven’t just read about other people’s research; now we also know how a big research project works,” says Laurenz Derksen. “Although there was a lot of work involved, this experiment lit a fire in me and got me excited about ambitious and collaborative research,” Derksen continues.

Buket Buse Demirci, who is now a doctoral student, felt that the project went far beyond the normal scope of seminars. As an example, she cites the Pre-Analysis Plan: the public registration of every single research step before the start of the experiment, which heightens the credibility of the statistical analyses as well as the reliability of the results. Another motivating factor, she says, is that all 13 students are listed as co-authors on the study detailing the results, which is published in one of the most prestigious interdisciplinary science journals. “I’ve contributed to a study that has not only been published in a scientific journal but could also have an impact in the real world,” says Demirci.

Practical applications through NGOs and media

Hangartner is aware that this type of research, which is embedded in a seminar, may sometimes also yield null results. Yet the experience is valuable for the students in any case, he says. It can help them anticipate what to expect if they embark on PhD studies and provides hands-on research experience, which is an asset for many different careers inside and outside of academia.

The collaborative research seminar is part of a more comprehensive project to develop algorithms that detect hate speech, and to test and refine further counterspeech strategies. To this end, the research team is collaborating with the Swiss women’s umbrella organization alliance F, which has initiated the civil society project Stop Hate Speech. Through this collaboration, the scientists can translate their research insights into practice and provide an empirical basis for alliance F to optimize the design and content of their counterspeech messages.

“The research findings make me very optimistic. For the first time, we now have experimental evidence that shows the efficacy of counterspeech in real-life conditions,” says Sophie Achermann, executive director of alliance F and co-founder of Stop Hate Speech. Also involved in the research project, which was sponsored by the Swiss innovation agency Innosuisse, are the media companies Ringier and TX Group via their newspapers Blick and 20 Minuten respectively.

LOCATION

Twitter

RESEARCH QUESTION

What kinds of messages work best to counter hate speech on social media?

TEAM

Dominik Hangartner

Gloria Gennaro

Sary Alasiri

Nicholas Bahrich

Alexandra Bornhoft

Joseph Boucher

Buket Buse Demirci

Laurenz Derksen

Aldo Hall

Matthias Jochum

Maria Murias Munoz

Marc Richter

Franziska Vogel

Salomé Wittwer

Felix Wüthrich

Fabrizio Gilardi

Karsten Donnay

RESEARCH DESIGN

Social media field experiment

KEY STAT

Authors of hateful tweets posted around one-third fewer racist or xenophobic comments after an empathy-inducing intervention compared to those in the control group