A Novel Methodology for Developing Automatic Harassment Classifiers for Twitter

Team Information

Team Members

  • Ishaan Arora, M.S. Candidate in Computer Science, Columbia Engineering (SEAS)

  • Julia Guo, Undergraduate Student in Computer Science, Columbia College

  • Sarah Ita Levitan, Assistant Professor, Hunter College

  • Susan McGregor, Associate Research Scholar and Co-Chair, DSI Data, Media and Society Center

  • Julia Hirschberg, Percy K. and Vida L. W. Hudson Professor of Computer Science, Columbia University

Abstract

Most efforts to identify abusive speech online rely on public corpora that have been scraped from websites using keyword-based queries or released by site or platform owners for research purposes. These corpora are typically labeled by crowd-sourced annotators – not the targets of the abuse themselves. While this method of data collection supports rapid development of machine learning classifiers, the resulting models often fail in the context of real-world harassment and abuse, which contain nuances that non-targets identify less easily. Here, we present a mixed-methods approach to building classifiers for abuse and harassment that leverages direct engagement with the target group, both to achieve high-quality, ecologically valid data sets and labels, and to generate deeper insights into the key tactics of bad actors. We use women journalists' experience on Twitter as an initial community of focus, and we identify several structural mechanisms of abuse that we believe will generalize to other target communities.
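The classifier pipeline the abstract describes could be sketched, at its simplest, as a supervised text classifier trained on tweets labeled by the targets themselves rather than crowd workers. The tiny inline dataset, label scheme, and model choice below are hypothetical placeholders for illustration, not the project's actual corpus or architecture:

```python
# Minimal sketch: a binary harassment classifier trained on
# target-labeled tweets. All examples and labels here are invented
# placeholders; the real project uses a corpus annotated by women
# journalists describing abuse directed at them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In the methodology described above, these labels would come from
# members of the target community, not crowd-sourced annotators.
tweets = [
    "great reporting on the hearing today",
    "you should be fired and worse",
    "loved your latest piece, thank you",
    "go back to the kitchen, nobody wants your opinion",
]
labels = [0, 1, 0, 1]  # 0 = not abusive, 1 = abusive

# Word and bigram TF-IDF features feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(tweets, labels)

# Score a new, unseen tweet (output is a 0/1 label).
print(model.predict(["thank you for the thoughtful article"])[0])
```

The same pipeline generalizes to any target community: only the labeled corpus changes, which is precisely why label quality and ecological validity matter more than model choice at this scale.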

Contact this Team

Team Contact: Ishaan Arora
