Twitter Finally Banned Revenge Porn. Now How to Enforce It?


Last night, Twitter made the non-consensual sharing of “intimate photos and videos” (read: revenge porn) a violation of its user rules. It’s a change as necessary as it is overdue, and it signals serious intent to rid the platform of trolls and other bad actors. What’s less clear? Whether Twitter has the means to turn that intent into reality.


Set aside for a moment the confounding fact that until just a few hours ago, posting nude pictures of another unwilling human was kosher in the Twitterverse. Progress is progress, even if it comes slowly. The new Twitter rules are plainly stated, unambiguous, and designed to help a lot of vulnerable people:



Private information: You may not publish or post other people’s private and confidential information, such as credit card numbers, street address or Social Security/National Identity numbers, without their express authorization and permission. You may not post intimate photos or videos that were taken or distributed without the subject’s consent.



That final sentence is the new addition, and it’s a good thing! At the very least it should act as a deterrent to the everyday deviants, the scorned exes, and the dumb teens.


What it doesn’t do, though, is address the root problems that can make Twitter feel less like a 21st-century salon than a digital mud pit. A rule like this isn’t worth much unless you’re prepared to enforce it, and the new rules still put the burden of reporting on the victim, which has to change if the policy is going to do any meaningful good.


The Rule of Law


How will Twitter administer these new rules? It has some enforcement machinery in place already. As it outlined in response to a series of questions from BuzzFeed, the platform has a team working day and night to address complaints from its users, and violators will have their accounts suspended.


That’s a good start. In practice, though, the process is likely to be lumbering at best and largely ineffective at worst. To get an offending picture taken down, the victim has to first know that it exists, then ask Twitter to remove it, and finally wait an unspecified and agonizing amount of time for a ruling. In the meantime, a determined troll can set up a few dozen more accounts, each ready to post the same photo and trigger the entire process again. It’s like trying to stamp out an entire ant colony by stepping on one mound.
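To see why the report-and-wait model scales so badly against repeat offenders, consider what an automated alternative might look like. The sketch below is purely hypothetical and not anything Twitter has announced: once a reported image is removed, its fingerprint is remembered, so byte-identical re-uploads can be blocked at posting time instead of being re-reported from scratch.

```python
# Hypothetical sketch of an automated repost blocker. None of this reflects
# Twitter's actual systems; the names, the in-memory storage, and the
# exact-match approach are all illustrative assumptions.
import hashlib

# Fingerprints of images that have already been ruled violating and removed.
removed_fingerprints = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an image (here, a plain SHA-256 digest)."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_takedown(image_bytes: bytes) -> None:
    """Remember a removed image so future copies can be caught automatically."""
    removed_fingerprints.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Check a new upload against past takedowns at posting time."""
    return fingerprint(image_bytes) in removed_fingerprints

# Usage: after one successful report...
record_takedown(b"<offending image bytes>")
# ...a byte-identical repost from a brand-new account is rejected outright,
# with no second report required.
assert should_block_upload(b"<offending image bytes>")
```

An exact digest like this is trivially defeated by re-encoding or cropping the image, so a real system would need perceptual hashing to catch near-duplicates; the point here is only that today’s process restarts from zero for every copy.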


If it seems far-fetched that someone would go through all that trouble to cause another human embarrassment and pain, very recent history has demonstrated otherwise. During the peak of the Gamergate movement, female game developers—and many, many others—found themselves the victims of a barrage of Twitter harassment, some of which included the outing of “private and confidential information” that Twitter had already banned. As soon as one vindictive account was shuttered, another sprang up in its place.


So even with these new rules, finding relief when you know you’re a target will be difficult. And if your nudes are being distributed on Twitter without your knowledge, it would seem to be impossible, since takedown requests need to come directly from the aggrieved party.


What does that mean in practice? Look at Reddit, which announced similar anti-revenge-porn policies two weeks ago; they went into effect this past Tuesday. So far the change has done nothing to alter the practices of forums like CandidFashionPolice, a thinly veiled creepshot emporium. Its images aren’t strictly nudes, but they are clearly sexualized and, by definition, nonconsensual. In most cases we can assume that the photographed women will never know those pictures exist, and so will never be able to ask for their removal.


What Else Could Twitter Do?


Outlawing revenge porn is a necessary first step toward making Twitter a more secure environment. Over the last few months, the company has also worked to streamline the process of reporting harassment and abuse, and it promises further action going forward. It’s clearly trying.


But these security measures, as currently implemented, are entirely reactive; they put the onus on the victim to make things right.


If Twitter really wanted to beat back the trolls, it has more aggressive options available. It could limit the number of accounts that can be associated with a single IP address. It could collaborate with law enforcement. A few reports suggest the company is already taking sterner measures of this kind, requiring new users who sign up through the identity-masking Tor browser to provide a phone number for verification. Twitter deflected those reports, but a story from TechCrunch appears to confirm their legitimacy.
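To make the first of those options concrete, here is a minimal sketch of per-IP signup capping. Everything in it—the threshold, the in-memory counter, the function names—is a hypothetical illustration, not anything Twitter has described:

```python
# Hypothetical sketch of capping account signups per source IP address.
# The threshold, the in-memory counter, and the function names are
# illustrative assumptions, not a real Twitter mechanism.
from collections import defaultdict

MAX_ACCOUNTS_PER_IP = 5  # illustrative cap, not an actual Twitter limit

signups_by_ip = defaultdict(int)  # ip -> accounts created from that address

def allow_signup(ip: str) -> bool:
    """Refuse new accounts once an IP address hits the cap.

    A production system would persist these counts, expire stale entries,
    and handle shared addresses (offices, carriers, Tor exits) with care.
    """
    if signups_by_ip[ip] >= MAX_ACCOUNTS_PER_IP:
        return False
    signups_by_ip[ip] += 1
    return True

# Usage: the sixth signup attempt from one address is turned away.
for _ in range(5):
    assert allow_signup("198.51.100.7")
assert not allow_signup("198.51.100.7")
```

The tradeoff is obvious: plenty of legitimate users share an IP address, which is presumably why the reported Tor measure leans on phone verification rather than a hard cap.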


When we asked Twitter if this policy update came with any new enforcement methods, it referred us to the same FAQ that outlined the existing process.


Here’s the larger point: eradicating this type of abuse may ultimately be impossible without fundamentally altering Twitter itself. The service’s commitment to the anonymity and privacy of its users causes plenty of harassment headaches, yes. Yet that same commitment is what makes Twitter a valuable tool for political movements like the Arab Spring. It’s a situation without a clean answer: the same policies that ensure the safety of some users leave others exposed.


If nothing else, Twitter’s policy update shows that it actively wants to engage with these complications rather than just let them play out. There likely isn’t a single solution that helps everyone in the exact way that they need. There’s no privacy panacea. The work to ensure people’s safety is nowhere near done. There is, though, a little bit more hope for victims of online abuse today than there was yesterday. It’s not enough, but it’s a start.


