
ToxCCIn: Toxic Content Classification with Interpretability

Workshop paper · 15 citations

Authors: Tong Xiang, Sean MacAvaney, Eugene Yang, Nazli Goharian

Appeared in: 11th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA @ EACL 2021)

Links/IDs:
- DBLP: conf/wassa/XiangMYG21
- arXiv: 2103.01328
- Google Scholar: 7wWfoDgAAAAJ:4DMP91E08xMC
- Semantic Scholar: 4d874a47870dfed68a64acb3dabcc8b113cb1836
- Enlighten: 234947
- smac.pub: wassa2021-toxic

Abstract:

Despite the recent successes of transformer-based models across a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media, because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple yet powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach is effective and, according to a human study, can produce explanations that exceed the quality of those provided by logistic regression analysis (often regarded as a highly interpretable model).
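The scoring rule stated in the abstract (a post is at least as toxic as its most toxic span) suggests a simple model structure: score each span and take the maximum as the post-level prediction. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the Hugging Face encoder, the linear per-token scoring head, and the token-level approximation of spans are all choices made here for concreteness.

```python
# Minimal sketch of max-over-spans toxicity scoring.
# Assumptions (not from the paper): a Hugging Face encoder, a linear
# per-token scoring head, and "spans" approximated at the token level.
import torch
import torch.nn as nn
from transformers import AutoModel


class MaxSpanToxicityClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # One toxicity score per token.
        self.token_scorer = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                                # (batch, seq, dim)
        token_scores = self.token_scorer(hidden).squeeze(-1)  # (batch, seq)
        # Mask padding so it can never be the maximum.
        token_scores = token_scores.masked_fill(
            attention_mask == 0, float("-inf")
        )
        # A post is at least as toxic as its most toxic span:
        # the post-level score is the maximum token-level score.
        post_scores, argmax_idx = token_scores.max(dim=-1)
        return post_scores, token_scores, argmax_idx
```

Under these assumptions, the post-level score could be trained with an ordinary binary cross-entropy loss (e.g., torch.nn.BCEWithLogitsLoss), while the per-token scores double as the explanation: the highest-scoring tokens mark the span the model considers most toxic. The paper additionally augments training to identify correct spans, which this sketch omits.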

BibTeX:

@inproceedings{xiang:wassa2021-toxic,
  author    = {Xiang, Tong and MacAvaney, Sean and Yang, Eugene and Goharian, Nazli},
  title     = {ToxCCIn: Toxic Content Classification with Interpretability},
  booktitle = {11th Workshop on Computational Approaches to Subjectivity, Sentiment \& Social Media Analysis},
  year      = {2021},
  url       = {https://arxiv.org/abs/2103.01328}
}