Hey Twitter, Killing Anonymity’s a Dumb Way to Fight Trolls


Tor users started reporting last week that they were being prompted more frequently than ever for phone number confirmation when creating a new Twitter account—or in some cases when using a long-standing account. This development is disastrous for the free speech the platform generally stands for, and it will likely not curb the abuse for which it has come under fire. If this change was targeted at that harassment—addressing the leaked acknowledgment from CEO Dick Costolo that “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years”—it’s a dangerous example of the Politician’s Syllogism: we must do something; this is something; therefore, we must do this.


To be clear: Twitter has denied claims that it’s specifically singling out Tor users for phone number confirmation. Twitter says instead that spam-like behavior is being flagged for additional checks. And to a point, that’s probably true. But we’ve known since at least last November that the service could deploy mandatory phone number verification against accounts engaged in harassment. Tor use is likely one of many signals that are weighed when determining whether a sign-up is from a new, valid user, or whether it’s an account that will be used for spam or abuse.


Only Twitter can determine what these signals are and how they’re balanced; if the company considers Tor use a very strong indicator of bad behavior, then Tor users will be disproportionately targeted for measures like phone number checks.


Unfortunately, that undermines the anonymity of the people who need it most, without necessarily providing protection for targets of harassment.


Twitter users absolutely need to have more control over what messages they see and who may interact with them. But beyond that baseline, even well-intentioned solutions can cause harm if they’re informed by an inadequate understanding of how harassment works on the platform. Most visibly, Twitter briefly changed the behavior of the block feature in late 2013 to address what it perceived as a major problem: because blocked accounts were precluded from following or retweeting the user who blocked them, it was easy to tell when you had been blocked.


This update was purportedly aimed at reducing offline retaliation for those blocks, but the backlash was swift. Users depended on blocks to stop some kinds of abusive behaviors that would be allowed under the new rules. Without an understanding of those abuse patterns, Twitter fell back to the familiar syllogism: here’s a problem; we must do something; this is something; we must do this.


The block behavior modification was reversed in mere hours, in large part because Twitter as a platform provided a megaphone to the people most affected by the change. Communities of women and people of color who are subjected to frequent abuse on the platform were able to take advantage of the medium to call on Twitter to #RestoreTheBlock, and get their voices heard.


Unfortunately, some voices can be so profoundly silenced by a pierced veil of anonymity that they won’t be around to protest unannounced updates. For instance, activists and journalists in countries where Twitter is forbidden use Tor to circumvent the censorship technology that blocks the site, and to do so without being traceable by the national internet service providers and phone network operators. Changes that make it harder to use Twitter and Tor in combination end up doing real harm to some of the speech that is most marginalized.


Asking these people to provide a phone number puts them at risk: in many places they will be forced to tie any phone number to a real-life identity. To pick just one example, a new law in Pakistan requires fingerprints from all cell phone users. Similar laws are common around the world, and obtaining a truly anonymous cell phone can be prohibitively difficult. Beyond that, a national phone provider could be pressed to provide details, say, of who is receiving Twitter confirmation texts.


Abuse on Twitter comes from accounts using real names, and from accounts using pseudonyms. It comes from accounts with massive follower bases, and from so-called “egg” accounts that are freshly made. It can come from a user with a handful of sock puppets, or from a small army of one troll’s dedicated fans. Very few of these abusers rely on Tor for their trolling, so they’re not likely to be affected. But other people who rely on strong anonymity will be.


Twitter has to dedicate resources to learning how abuse works systemically on the platform and put powerful tools in the hands of the users who need them. It needs to craft clearer terms of service and show that it is enforcing them. These steps may be difficult, but they’re necessary. Cracking down on anonymity tools may seem like something to do, but Twitter—and the other online platforms we count on—need to do better than just doing something.


