Since its earliest days, flaming, trolling, harassment, and abuse have plagued the Internet. Our aim is to computationally model abusive online behavior and build tools that help counter it, with the goal of making the Internet a more welcoming place. In particular, we investigate a novel approach to identifying online verbal abuse using cross-community linguistic similarities between posts from different communities. This work will enable a transformative new class of automated and semi-automated applications that depend on computationally generated abuse predictions.
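The approach itself is not spelled out here, so the following is only a rough sketch of the cross-community idea: it assumes TF-IDF features and cosine similarity (neither is stated in the description) and uses made-up example posts, scoring posts from one community by their similarity to posts already flagged as abusive in another.

```python
# Illustrative sketch only: the features (TF-IDF) and similarity measure
# (cosine) are assumptions, not the method described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical posts flagged as abusive in a source community.
source_abusive_posts = [
    "you are worthless, nobody wants you here",
    "get lost, you pathetic idiot",
]

# Unlabeled posts from a different (target) community to be scored.
target_posts = [
    "thanks for the detailed answer, that fixed my build",
    "nobody wants you here, just leave already",
]

# Fit a shared vocabulary over both communities so vectors are comparable.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
vectorizer.fit(source_abusive_posts + target_posts)

source_vecs = vectorizer.transform(source_abusive_posts)
target_vecs = vectorizer.transform(target_posts)

# Score each target post by its maximum similarity to any known abusive post.
similarity = cosine_similarity(target_vecs, source_vecs)
scores = similarity.max(axis=1)

for post, score in zip(target_posts, scores):
    print(f"{score:.2f}  {post}")
```

A real system would of course use far richer linguistic features and labeled data, but the core idea, transferring signals of abuse learned in one community to posts in another, is the same.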
The comp.social lab focuses on the design and analysis of social media. According to its website, the lab "like[s] puppies, mixed methods and new students (particularly MS)."