“We found no violation!”: Twitter's Violent Threats Policy and Toxicity in Online Discourse

Threat moderation on social media has been subject to much public debate and criticism, especially for its broadly permissive approach. In this paper, we focus on Twitter's Violent Threats policy, highlighting its shortcomings by comparing it to linguistic and legal threat assessment frameworks. Specifically, we foreground the importance of accounting for the lived experiences of harassment—how people perceive and react to a tweet—a measure largely disregarded by Twitter's Violent Threats policy but a core part of linguistic and legal threat assessment frameworks. To illustrate this, we examine three tweets by drawing upon these frameworks. These tweets showcase the racist, sexist, and abusive language used in threats towards those who have been marginalized. Through our analysis, we highlight how content moderation policies, despite their stated goal of promoting free speech, in effect work to inhibit it by fostering a toxic online environment that precipitates self-censorship out of fear of violence and retaliation. In doing so, we make a case for technology designers and policy makers working in the sphere of content moderation to craft approaches that incorporate the various nuanced dimensions of threat assessment toward a more inclusive and open environment for online discourse.

CONTENT WARNING: This paper contains strong and violent language. Please use discretion when reading, printing, or recommending this paper.

CCS CONCEPTS
• Human-centered computing → Collaborative and social computing; Collaborative and social computing theory, concepts and paradigms; Social media
• Social and professional topics → Computing/technology policy; Censorship; Hate Speech