A group of NYU and University of Illinois at Chicago computer scientists presented a paper at the 2017 ACM Internet Measurement Conference in London describing their findings from a large-scale study of online doxings, with statistics on who gets doxed (the largest cohort being American, male, gamers, and in their early 20s), why they get doxed ("revenge" and "justice"), and whether software can detect doxing automatically, so that human moderators can take down doxing posts quickly.
The researchers also showed (unsurprisingly) that people who've been doxed are extremely likely to either delete or lock down their online profiles.
One interesting proposal to come out of the paper is the automated generation of anti-SWATing watchlists, which police could use to evaluate whether a callout for an active shooter or other SWATing scenario is likely to be a hoax.
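The paper doesn't specify how such a watchlist would be built, but the idea can be sketched in a few lines: addresses harvested from detected dox files are normalized into a lookup set that dispatch software could query before escalating a call. Everything here (class names, the normalization rules, the matching logic) is illustrative, not the researchers' implementation:

```python
# Hypothetical sketch of the anti-SWATing watchlist idea. Addresses
# found in detected dox files are canonicalized and stored; a dispatcher
# can then check whether an emergency call targets a recently doxed
# address, flagging an elevated hoax risk. Illustrative only.

def normalize_address(address: str) -> str:
    """Canonicalize an address for matching: lowercase, strip
    punctuation, collapse whitespace."""
    cleaned = "".join(ch for ch in address.lower()
                      if ch.isalnum() or ch.isspace())
    return " ".join(cleaned.split())

class SwatingWatchlist:
    def __init__(self) -> None:
        self._addresses: set[str] = set()

    def add_from_dox(self, address: str) -> None:
        """Record an address that appeared in a detected dox file."""
        self._addresses.add(normalize_address(address))

    def is_elevated_hoax_risk(self, address: str) -> bool:
        """True if a call's address matches a recently doxed address."""
        return normalize_address(address) in self._addresses

watchlist = SwatingWatchlist()
watchlist.add_from_dox("123 Main St., Springfield")
print(watchlist.is_elevated_hoax_risk("123 MAIN ST SPRINGFIELD"))  # True
```

A real deployment would need fuzzier matching (abbreviations, unit numbers, geocoding) and time-based expiry, but the core is just a fast membership check at call-intake time.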
In this paper, we have presented the first quantitative study of doxing. We created an automated framework to detect doxes online, extracted online social networking accounts referenced in each dox file, and monitored these accounts for signs of abuse. Based on our manual analysis of these doxes, we were able to measure and understand the kinds of highly sensitive information shared in dox files.
Through these techniques, we were able to measure how many people are targeted by doxing attacks on popular text sharing services. We found that doxing happens frequently on these sites, and that some demographics and communities are disproportionately targeted. We find that doxing victims in our data set are overwhelmingly male, have an average age in their 20s, and that a significant number are part of gamer communities (or maintain accounts with multiple video-game related websites). We also find that most doxes include highly identifying information about the victim and family members, such as full legal names, phone numbers and online social networking accounts. We found that doxing victims were dramatically more likely to close or make social networking accounts private after being doxed, and that abuse filters deployed by Instagram and Facebook successfully reduced how frequently doxing victims closed or increased the privacy of their accounts.
We hope that our quantitative approach helps researchers, internet users and web-services providers understand the scope and seriousness of online doxing. We also hope that our work can complement existing, qualitative work on doxing in order to provide a fuller understanding of doxing abuse online. Finally, we hope that maintainers of online social networks, law enforcement, and other parties with an interest in keeping internet users safe from the effects of doxing can use our techniques to mitigate the harm doxing causes.
Fifteen Minutes of Unwanted Fame: Detecting and Characterizing Doxing [Peter Snyder, Chris Kanich, Periwinkle Doerfler and Damon McCoy/ACM Internet Measurement Conference 2017]
(via 4 Short Links)
(Image: Schutz, CC-BY-SA)