For Google, the ‘Right to Be Forgotten’ Is an Unforgettable Fiasco


Illustration: Getty Images



The recent European Union ruling that granted citizens the “right to be forgotten” from Google’s search results is morphing into a nightmare for the web giant.


British news organizations are reporting that Google is now removing links to some of their articles, including stories that involve the disgraceful actions of powerful people.


On Wednesday, BBC economics editor Robert Peston said he received a notice from Google informing him that a 2007 blog post he wrote on ex-Merrill Lynch CEO Stan O’Neal would no longer turn up in search results in Europe. Meanwhile, the Guardian reports that it received automated notifications on six articles whose links would be axed from European view. These included stories on a scandal-plagued Scottish Premier League soccer referee, an attorney facing fraud allegations, and, without explanation, a story on French office workers making Post-It art.


In some ways, Google is just following the EU’s dictates. The company fought the EU on the right-to-be-forgotten issue, but now it has no choice but to implement the ruling, which the court says applies “where the information is inaccurate, inadequate, irrelevant or excessive.” By that standard, these takedowns would seem to overstep the letter of a decision ostensibly intended to protect the reputation of individuals, not censor news. But the issue for Google isn’t just freedom of speech or freedom of the press. The “right to be forgotten” decision is calling unwanted attention to the easy-to-forget fact that, one way or another, fallible human hands are always guiding Google’s seemingly perfect search machine.




The BBC’s Peston writes that the removal of his post could be an example of clumsiness on Google’s part in the still-early days of its effort to comply with the EU’s judgment. “Maybe I am a victim of teething problems,” he writes. “It is only a few days since the ruling has been implemented—and Google tells me that since then it has received a staggering 50,000 requests for articles to be removed from European searches.” That means things may get less censorious over time. But in the meantime, the fiasco is chipping away at the gleaming edges of Google’s brand.


The removal of links to one article may be a blip, but the steady accumulation of removed links, especially to quality journalism written in a clear spirit of public interest, starts to erode trust in the reliability of Google search results. Now, anyone who searches Google for even one of the articles mentioned above will have to wonder whether they’re getting the whole story. And anything that suggests compromise, lack of transparency, or incompleteness in search results plants a seed of doubt that undermines the idea of what Google is supposed to be.


Since the beginning, Google has cultivated the idea that its results are—like good journalism—unbiased, complete, and compelling. Nowhere is that message more clearly telegraphed than in the design of Google’s search interface itself. Google isn’t a person. It’s just this little box. Put your search here and the smartest computers in the world will tell you what you need to know—no messy human judgment involved.


In reality, however, teams of living, breathing people are constantly at work behind the scenes at Google tweaking algorithms to juice search results according to subjective standards like “quality.” This is often a good thing. Concerted efforts to cut down on the proliferation of link spam and content-farmed drivel have kept search results truly useful, which is good for users and Google both. But as Facebook has learned even more forcefully from the backlash to its “emotional contagion” study, users prefer not to be reminded that human-crafted filters unavoidably come into play in the dissemination of digital content.


Assessing the reaction to Facebook’s study, social media scholar danah boyd writes that the anger is really “at the practice of big data” itself—the idea that Facebook and other companies that “collect and use data about people” are far from just neutral facilitators. Facebook “designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that operates, but they don’t really know, or even want to know,” boyd writes. “They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think they’re being manipulated. So when they find out what soylent green is made of, they’re outraged.”


The same could just as well apply to Google. People don’t want to think their search results are malleable, which means Google’s interests are also best served by keeping that notion obscured behind a simple search box. The censorship of news articles through abuse of the “right to be forgotten” is just a far more blatant reminder of that malleability, a reminder that Google would clearly like everyone to just forget.


