Glenn Fleishman reports on how the platform could fix its harassment problem.

Twitter has an abuse problem, and it's not the one that you think. While there are endless complaints about the service's inability to respond rapidly to reports of harassment—especially sustained campaigns—its power is limited even when it does act. It can suspend an account, temporarily or forever, but not prevent abusers from creating new ones.

The recent GamerGate saga, in particular, highlighted sequential account creation as the weapon of choice for maintaining a relentless stream of online abuse. To foster account creation—and its business model, stock-market valuation, and advertising rates—Twitter requires only an email address to start an account.

I'm sure Twitter fights spam heavily, because spam degrades the user experience for everyone. But the same rules—and spamfighting tools such as blocks on IP ranges and on automated account creation—are of little use against a determined individual willing to have account after account permanently suspended over time. A person must work awfully hard to earn that level of animosity from Twitter in the first place, yet the social network is seemingly helpless to keep such people off, because, unlike Facebook, it requires no proof of identity, nor does it later check for the lack of one.

The firm responded to a request for comment about sequential account abuse by pointing me to its policy:

Serial Accounts: You may not create multiple accounts for disruptive or abusive purposes, or with overlapping use cases. Mass account creation may result in suspension of all related accounts. Please note that any violation of the Twitter Rules is cause for permanent suspension of all accounts.

Twitter didn't respond to a follow-up query about how it enforces a ban on serial account creation.

While it's an obvious problem, its extent isn't widely known or quantified, and unless you're the target of harassment, you may be unaware of it entirely. Women and people of color receive a vastly disproportionate share of the harassment on Twitter, as they do elsewhere online.

GamerGate is the largest, most sustained, and most lopsided case of this I've ever seen. I'm aware of several individuals who are on their umpty-umpth account; in one case, I know of six previous accounts, all using the same (and real) public name. At the request of people being actively harassed, sometimes with a physical component, I'm avoiding more detail. However, Kotaku just ran a story about a game-focused harasser who has had "dozens if not hundreds" of accounts suspended for violating Twitter's terms.

In the normal scheme of behavior and consequences, someone who had put any time into her or his Twitter account — accumulating followers, building a history of messages, and becoming recognized by a handle — would be deterred from pushing too far by the threat of losing that identity. Someone who went over the line so egregiously that Twitter suspended the account permanently would conceivably take that reprimand as a disincentive either to return with a new account or to behave as extremely in the future.

So when we talk about sequentially suspended individuals, we're talking about folks deeply invested in their need to speak, whether to troll, grief, or rage. In the case of GamerGate and other political areas that attract harassers, there's the supposition that a significant percentage of accounts are sock puppets: disposable creations of someone who can afford to burn those accounts without losing their central account and credibility. (For an extreme case of sock puppetry, read Aja Romano's account of a decade-long trolling campaign by a hate blogger whose identity was recently tied to an up-and-coming science fiction writer.)

Twitter's policy clearly forbids abusers from creating new accounts and perpetuating the same behavior. But its abuse-reporting process sets a high enough bar that those targeted must spend the same effort reporting recidivists each time. In monitoring some of these individuals, though, it's clear Twitter doesn't play games once reports come in: the new accounts are quickly banned.

But that still puts the burden on victims. (Its partnership with Women, Action, & the Media (WAM) appears designed to test working with a third party that can accept reports with a lower bar to filing.)

This particular case is yet another nail in the coffin of the desirability of pure anonymity in socially engaged parts of the Internet. Anonymity is a powerful tool for free speech and democracy when countering the power of the state, and the Internet offers the fullest expression of such anonymity, but the fight for anonymous speech ends when promotion of it is inextricably and demonstrably linked to enabling harassers. You can see this in every online forum, every comment thread, every social network: where speech has no consequence, harassment flourishes.

As Model View Culture editor Shanley Kane tweeted, regarding manufactured controversy over the WAM reporting form, it's "patently false that we need to protect the abusive speech of dominant groups to protect the speech of marginalized groups."

By its business model, Twitter is reactive to abuse, and it rightly uses a light hand that protects legitimate parties from being dogpiled, but that light hand also keeps harassers active. Like so many of us, Twitter built its identity and suspension policies on the hope that there aren't that many sociopaths and nonconsensual sadists in the world. There may not be, but services that aren't oriented the right way allow people with these personalities or conditions to amplify their efforts and dominate a space.

In its maturity, Twitter must better consider the difference between anonymity and pseudonymity. The former is a measure of how little one party knows of another; the latter, an attempt to wear a different identity. Hidden knowledge, coupled with public pseudonymity, allows commenting untied from one's true identity but not from consequence.

It may be that Twitter's best answer would be tiered users, of which it has but two kinds now: regular and "verified." Through a process that isn't publicly defined, Twitter verifies the real-world identities of members of the press (typically in newsrooms), celebrities, and others whose accounts are frequently subject to impersonation. An article by Sarah Jeong at The Verge looks into reports that Twitter may have secret tools and configuration options it can control behind the scenes.

I, and I'm sure others, would gladly submit to a slightly higher level of scrutiny in order to assert myself on Twitter as a human who accepts accountability — along with optional switches that let me filter or throttle those who want their speech almost completely detached from identity. I don't want to squelch the voices of people for whom anonymity is important, but I also believe the current system favors an unknown identity over clearly documented paths to abuse.

† I received days of Twitter abuse for having the temerity to plan to record a podcast with my friend Brianna Wu, a frequent target of GamerGate fans.

Illustration: Rob Beschizza