Why Facebook Knows You Better Than Your Friends Do



Getty Images



In the movie Her, Joaquin Phoenix’s character falls in love with his computer’s operating system, which through the magic of machine learning — and Hollywood — comes to know and understand him better than anyone else. It’s a futuristic critique of human reliance on technology. But according to one new study, it’s a future that may not be all that far away.


This week, researchers from the University of Cambridge and Stanford University released a study indicating that Facebook may be better at judging people’s personalities than their closest friends, their spouses, and in some cases, even themselves. The study compared people’s Facebook “Likes” to their own answers in a personality questionnaire, as well as the answers provided by their friends and family, and found that Facebook outperformed any human, no matter their relation to the subjects.


That’s a substantial finding, the researchers say, particularly given that human beings have evolved to be good judges of personality. It’s what keeps us out of danger and influences our relationships. But the realization that computers may be better equipped to make these judgments than humans are could help cut through the natural bias that pervades human interactions. Never mind what this says about how much power Facebook wields.


“We’re walking personality prediction machines,” says Michal Kosinski, a computer science professor at Stanford, “but computers beat us at our own game.”


“Likes” Predict What You’re Like


The researchers began with a 100-item personality questionnaire that went viral after David Stillwell, a psychometrics professor at Cambridge, posted it on Facebook back in 2007. Respondents answered questions that were meant to root out five key personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. Based on that survey, the researchers scored each respondent in all five traits.


Then, the researchers created an algorithm and fed it every respondent’s personality scores, as well as their “Likes,” to which subjects voluntarily gave researchers access. The researchers only included “Likes” that respondents shared with at least 20 other respondents. That enabled the model to connect certain “Likes” to certain personality traits. If, for instance, several people who liked Snooki on Facebook also scored high in the extroverted category, the system would learn that Snooki lovers are more outgoing. The more “Likes” the system saw, the better its judgment became.
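

The article doesn’t reproduce the researchers’ code, but the basic recipe it describes (turn each person’s “Likes” into a long 0/1 vector and fit a regression from that vector to each questionnaire trait score) can be sketched in a few lines. The library choice, scikit-learn’s ridge regression, and the synthetic stand-in data below are my own assumptions; this is an illustration of the general idea, not the study’s actual pipeline.

    # Toy illustration: predict a trait score (here, extraversion) from binary "Like" vectors
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_users, n_likes = 1000, 200

    likes = rng.integers(0, 2, size=(n_users, n_likes))   # 1 = this user liked this page
    extraversion = rng.normal(size=n_users)                # questionnaire score (stand-in data)

    model = Ridge(alpha=1.0).fit(likes, extraversion)

    # score a new user from their Likes alone
    new_user = rng.integers(0, 2, size=(1, n_likes))
    print(model.predict(new_user))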


In the end, the researchers found that with information on just ten Facebook “Likes,” the algorithm judged a person’s personality more accurately than a work colleague could. With 150 “Likes,” it outperformed family members, and with 300 “Likes,” it could best a person’s spouse.


What’s more, at times, the Facebook model could beat the subjects’ own answers. As part of the survey, the researchers also asked respondents to answer concrete questions such as how many drinks they have a week or what type of career path they’ve chosen. Then, they tried to see if they could predict how many drinks someone was likely to have in a week based on their answers to the personality test.


Once again, they found that Facebook Likes were a better indicator of people’s substance use than even their own questionnaires were. “When people take the questionnaire, they present themselves in a slightly more positive way than they really are,” Kosinski says. “This tendency to self-enhance makes computers slightly more objective.”


Computers Don’t Like (Or Dislike) Any of Us


While the researchers admit the results were surprising, they say there’s good reason for it. For starters, computers don’t forget. While our judgment of people may change based on our most recent — or most dramatic — interactions with them, computers give a person’s entire history equal weight. Computers also don’t have experiences or opinions of their own. They’re not limited by their own cultural references, and they don’t find certain personality traits, likes, or interests good or bad. “Computers don’t understand that certain personalities are more socially desirable,” Kosinski says. “Computers don’t like any of us.”


That said, there are limitations to what computers can understand, too. They can’t read facial expressions or take subtle cues like humans can. And Kosinski admits that a study like this is likely to be much more effective among a younger subset of people, who are more likely to share their personal information on Facebook.


Still, Kosinski rejects the notion that Facebook “Likes” reveal only the most superficial aspects of someone’s personality. “I think it’s the other way around,” he says. “I think the computer can see through the prejudice we all have.”


That, he says, could have implications far beyond Facebook. Sure, this trove of personality data could turn Facebook into more of an advertising powerhouse than it already is. But more importantly, Kosinski says, it could help keep all of us from being stereotyped or categorized based on other people’s biases. “Computers don’t care if you’re a man, woman, old, young, black, or white,” Kosinski says. “This gives us a cheap, massive, fake-proof algorithm to judge the personality of millions of people at once.”



Check Out These Jaw-Dropping Prints of Classic Nintendo Boss Fights



Nick Derington/Nakatomi, Incorporated



Step 1: Walk into your living room. Step 2: Throw out that still-life you got from a 1989 Fingerhut catalog. Step 3: Replace it with these epic prints of iconic Nintendo bosses, and watch houseguests gawk in geeky awe.


Nick Derington, an artist based out of Austin, Texas, is the talent behind these hand-printed silkscreen pieces, which are also available in glow-in-the-dark versions.


Entitled “Boss Fight,” the set features Bowser, Ganon, and Mother Brain looking particularly horrifying (and Bowser looking rather undead), towering over their nemeses Mario, Link, and Samus Aran. They’re captivating images, especially if you lived through these fights as a kid. Every piece is numbered and signed by Derington, and there are only a hundred of each of the three prints available.


Each set of three measures 12 inches by 24 inches and sells for $100, with individual prints going for $40 each. (But don’t you want the full triptych?)


You may have seen Derington’s work elsewhere: He was the lead animator of the 2006 film A Scanner Darkly, has worked on graphic novels for DC and Marvel, and created a handheld statuette of H.P. Lovecraft’s creepy deity Cthulhu.


“Boss Fight” is on sale today over at Nakatomi, so pick up a few for yourself, or as a belated holiday gift for a friend—you can never be too late when it comes to nerdy home décor.



Species of bird 'paints' its own eggs with bacteria to protect embryo

Researchers from the University of Granada and the Higher Council of Scientific Research (CSIC) have found that hoopoes cover their eggs with a secretion they produce themselves, loaded with mutualistic bacteria, which is retained by a specialized structure in the eggshell and increases hatching success. So far this sort of behaviour has been detected only in this species of bird, and it serves to protect the eggs from infection by pathogens.



In an experiment published in the Journal of Animal Ecology, scientists from several research groups prevented female hoopoes from impregnating their eggs with this substance, which the birds produce themselves in the so-called uropygial gland. The research groups involved in this project were the following: Animal Behaviour and Ecology and Microorganism-Produced Antagonistic Substances, both from the UGR, and the Evolutionary Ecology and Behaviour and Conservation groups from the Dry Areas Experimental Station (Almería, CSIC).


In doing so, they confirmed that the amount of pathogenic bacteria found inside eggs that failed to hatch was higher in nests where the females had been experimentally prevented from using their secretion than in nests where they were allowed to use it. They concluded that this secretion acts as a barrier against the entry of pathogens into the egg.


Presence of enterococci


It is not just the secretion as a whole that matters: the bacteriocin-producing bacteria within it (bacteriocins are small antimicrobial proteins), the enterococci, are particularly beneficial for the developing embryos, since hatching success was directly related to the amount of these enterococci in the eggshells and in the females' secretions. The more enterococci present, the higher the hatching success.


As UGR zoology professor Manuel Martín-Vivaldi, one of the authors of this research, underlines, in recent years the field of evolutionary ecology has acknowledged "the important role played by bacteria, not just as infectious agents capable of producing diseases, but also as allies of animals and other living creatures in their struggle against disease, due to their extraordinary capacity to synthesise compounds with antimicrobial properties."


In the case of the hoopoe's uropygial gland, scientists have confirmed that its components are very different from those of other birds. This is to a large extent due to the action of the bacteria present in this particular gland.


This research has also revealed that hoopoes have developed an exceptional property in their eggs -- one that has not so far been found in any other species of bird. Their surface carries many small depressions that do not completely penetrate the shell, and whose function appears to be to retain the bacteria-carrying secretion that covers the egg.


Bacteria in the eggshell


"With this experiment, we have been able to establish that if the females can use their secretion, towards the end of the incubation period, those tiny craters are full of a substance saturated with bacteria. If we preclude the use of this secretion, these tiny craters appear empty towards the end of the hatching process," said professors Martín-Vivaldi.


These results show that in this particular species of bird, "its reproductive strategy has evolved hand in hand with the use of bacteria which may be beneficial for the production of antimicrobial substances, which they cultivate in their gland and then apply to eggs that are particularly well equipped to retain them."


These scientists are currently working to determine the specific composition of the bacterial community within the gland, how these symbionts are acquired, and the types of antimicrobial compounds these bacteria synthesize, which are capable of protecting the embryos as they develop.


Further research along these lines will facilitate a better understanding of how mutualistic interactions between animals and beneficial bacteria work, and help detect new antimicrobial substances with potential uses in medicine or food preservation.


This study is the result of the following two projects: "Nests, parasites and bacteria: a multidisciplinary approach to the study of adaptation for breeding in high parasitism risk environments," funded by the Ministry of Science and Innovation, and "Biodiversity and acquisition mechanisms in the bacterial community within the uropygial gland of hoopoes (Upupa epops)," funded by the Department of Innovation, Science and Business of the Junta de Andalucía (within the Programme of Incentives for Excellence in Research).




Story Source:


The above story is based on materials provided by the University of Granada. Note: Materials may be edited for content and length.



Microsoft Turns to Verizon to Speed Up Its Video Delivery



Microsoft CEO Satya Nadella speaks during a press briefing on the intersection of cloud and mobile computing on March 27, 2014, in San Francisco. Eric Risberg/AP



Microsoft has made Verizon Communications the centerpiece of its effort to build out a worldwide network to speed up video delivery for users of its Azure cloud computing service.


To some, the move comes as a bit of a surprise. Microsoft had been building out this network itself, according to observers, but now appears to have reversed course. When it comes to internet infrastructure, giant Web players like Microsoft typically build out their own offerings, rather than relying on service providers such as Verizon. And in recent years, these content delivery networks, or CDNs, have seen some big build-outs. When you’re Netflix or Google, it helps to have full control of your CDN, so that you can more efficiently deliver a significant slice of the internet’s Sunday evening data.


And Microsoft has already successfully built out a different CDN, to speed up delivery of its own web pages, videos, and monthly software updates. But the Azure network—built for customers rather than the company’s internal use—was a different type of project, says Dan Rayburn, a streaming media expert and blogger.


A Verizon spokeswoman confirmed to WIRED Monday that the company’s Digital Media Services CDN network is used by Microsoft and its Azure group, and according to Microsoft the traffic will be on the EdgeCast CDN, which Verizon purchased in 2013. Microsoft told WIRED, “We are happy to partner with EdgeCast to provide an integral component of the Azure Media Services workflow.”


But dumping its internal effort makes sense for Microsoft, says Rayburn, who first broke the news of the Microsoft-Verizon deal. Azure’s biggest competitor, Amazon, has been offering CDN services—built on its very own delivery network—to cloud users for years. However, Microsoft was late to the game and offered a bare-bones product, Rayburn says. The Verizon deal “gets them to market faster with a solution that has more capacity,” he says.


It also shows that there is constant flux, even in this era of do-it-yourself web giants.



How Do You Model This Coin Flip Bet?


The bet is this: we flip a coin (actually, Derek is the one making the bet). If you win the flip, you get twenty dollars. If you lose the flip, you lose only ten dollars. Got it? Would you take the bet? If you watch the video above you will find that most people would NOT take this bet. No one wanted to lose ten dollars. But Derek Muller (from Veritasium) claims that it’s worth considering – especially if you run the bet multiple times. So, what are the chances that you would lose money if you flipped the coin ten times? What about 100 times? Let’s find out.


Let me go ahead and say that there is indeed a way to calculate the probability of losing money on ten flips. But I don’t want to do it that way. I want to model this with a quick python program. Now I’m going to show you how to do that.


Modeling Coin Flips in Python


The first thing we will need is a random picker. In python, you will need to import the random module (not any random module, but THE random module). Actually, we only need one function from that module, the choice function. Check out this super simple program. Oh, if you don’t have python installed – try this.


[Screenshot of code: randombet.py]
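

The program itself appeared as a screenshot in the original post; here is a minimal reconstruction of what it presumably looked like (representing the two outcomes as the strings “heads” and “tails” is my assumption):

    # pick one of the two outcomes at random
    from random import choice

    print(choice(["heads", "tails"]))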


When you run this code, you might get something like this:


[Screenshot of output in the Python 2.7.6 shell]


It’s that simple. Run it more than once just to convince yourself that it is indeed generating random “flips”. Even though this is cool, it’s not that useful. Let’s jump to something better. Here is a program that flips the coin ten times. If it is “heads” the player gets 20 dollars. If it’s tails, they lose 10 dollars. How much money do they have at the end of the game?


[Screenshot of code: randombet.py]
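

That screenshot is gone too, but from the description the ten-flip version was presumably something along these lines (the variable names are my own):

    # flip the coin ten times: heads wins 20 dollars, tails loses 10 dollars
    from random import choice

    N = 10        # desired number of flips
    total = 0     # running total of money won
    n = 1
    while n <= N:
        if choice(["heads", "tails"]) == "heads":
            total = total + 20
        else:
            total = total - 10
        n = n + 1

    print(total)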


If you aren’t familiar with python, let me say something about loops. First, this isn’t the most efficient loop. There are cooler ways to flip this coin ten times. However, I like this way since it seems to make sense to many people. The basic idea is to have a counter variable (n) that goes up by one each time the loop runs. The loop keeps going until the value of n exceeds the desired number of loops (N). In python, anything that is tab-indented is part of the loop. When you outdent, it ends the loop. I know that seems weird, but that’s the way python works.


Go ahead. Change N to 100 and rerun it. Change it to 1000. Python doesn’t really care how big that number is.


More Complicated and More Useful Code


Really I don’t want to flip the coin 100 times. I want to run the bet 100 times where I flip the coin 100 times. Or maybe I want to run the 100-flip bet 1,000 times. How do you do that? The key is to define a function. Let me just show you the code and then I will give a quick explanation.


[Screenshot of code: randombet.py]
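

Since the screenshot of that program didn’t survive either, the sketch below reconstructs it from the explanation that follows (the overall structure and variable names are inferred, not copied from the original):

    # a flip() function, plus 1000 runs of the 100-flip bet
    from random import choice
    from numpy import mean

    def flip(N, lose, win):
        # play N flips and return the total amount of money won
        total = 0
        n = 1
        while n <= N:
            if choice(["heads", "tails"]) == "heads":
                total = total + win
            else:
                total = total + lose   # lose is a negative number, e.g. -10
            n = n + 1
        return(total)

    money = []   # result of each 100-flip bet
    lost = 0     # number of bets that ended below 0 dollars

    for i in range(1000):
        result = flip(100, -10, 20)
        money = money + [result]
        if result < 0:
            lost = lost + 1

    print(mean(money))
    print(lost)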


First, there is the function. I defined the function “flip” at the beginning of the program. The function takes in three parameters (the number of flips, the amount you lose and the amount you win). Inside the function is essentially the same as our first looping program. The only thing added is the “return(total)”. In the rest of the program, if you put a line that says flip(10,-10,20) it would give you the amount of money you win after 10 flips.


There are a couple of other things to point out. First, this:


[Screenshot of code: randombet.py]
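

That line was also shown as a screenshot; it is presumably just the NumPy import, something like:

    from numpy import mean   # or "from numpy import *", so that mean() can be called directly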


Numpy is another python module. We need this module in order to easily calculate the average of the amount of money that was won. That leads to the second thing that I need to mention – a list. Look at this line.


[Screenshot of code: randombet.py]
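

Reconstructing again from the description, the line in question is almost certainly just:

    money = []   # an empty list that will hold the result of each bet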


The variable “money” starts as an empty list. Every time I run the 100 flips, I am going to add the result to this list. To do that, I would just say money = money + [stuff] where stuff is the amount won. Now, after 1000 runs of 100 flips, I have a list that has 1000 elements in it. Since I imported the numpy module, I can just find the average money value with “mean(money)”. That’s not too bad. Right?


But did I ever have 100 flips where I lost money? Here I added another variable called “lost”. When I flip the coin 100 times, I get the amount of money won. If this value is less than 0 dollars, then I add 1 to this lost counter. At the end, I print it. Of course, out of 1000 bets I probably won’t lose. Run it yourself and see what happens.


Derek claims that the average amount you win is $500 – do you get something close to that? What if you change the win and lose amounts to -10 and 15 dollars? What happens then? Maybe you should try that.
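

(As a quick sanity check on that $500 figure: each flip is worth, on average, 0.5 × $20 + 0.5 × (−$10) = $5, so 100 flips should average about 100 × $5 = $500.)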


Making a Histogram


Finding the average amount of money won (or lost) is cool and fun – but it doesn’t tell the whole story. Really, humans don’t want just the average. They want to see the whole distribution of the data. Sure, you could print out all 1,000 results of the betting but that wouldn’t be easy to read.


The best way to represent this data is with a histogram (btw, Plotly’s introduction to histograms is simply the best). If you need a tutorial on histograms with python and Plotly, check this out.


See. Doesn’t that look nice? Here is the code that I used to make this program (you will need your own plotly login and key though).
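

The histogram image and the Plotly code were embedded in the original post and aren’t reproduced here. As a stand-in (my substitution, since the post itself used Plotly), a matplotlib version produces the same kind of plot from the simulation above:

    import matplotlib.pyplot as plt
    from random import choice

    def flip(N, lose, win):
        # play N flips and return the total amount of money won
        total = 0
        for _ in range(N):
            total = total + (win if choice(["heads", "tails"]) == "heads" else lose)
        return total

    # run the 100-flip bet 1000 times and plot the distribution of winnings
    money = [flip(100, -10, 20) for _ in range(1000)]

    plt.hist(money, bins=30)
    plt.xlabel("money won after 100 flips (dollars)")
    plt.ylabel("number of bets")
    plt.show()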


Ok, clearly there is going to be homework.



  • Create a plot that shows the average amount of money won vs. the amount you could win. So, on the horizontal axis should be the amount of money won per flip (start at 10 dollars and assume you lose 10 dollars) and on the vertical axis should be the average amount won after 100 flips.

  • Create a plot of the average number of losses vs. the number of flips (use win 20 and lose 10).

  • Derek claims the risk of losing money is 1 out of 2,300. Run the 100 flip bet 10,000 times and see how many times you would lose money.


You won’t learn how to make histograms and numerical calculations unless you practice. Oh, and as Derek says – this is really a lesson about life. You need to take some risks to make some gains.



New Documentary Follows YouTube’s Biggest Minecraft Superstars


Notch made a lot of money making Minecraft, enough to outbid Beyoncé on a $70 million mansion. But even some of his game’s players are getting rich and famous.


A new documentary, Minecraft: Into the Nether, will look at the lives of Minecraft streamers and Let’s Players whose videos featuring them playing the global phenomenon building game have earned them millions of subscribers and billions of video views.


Along the way, the documentary will explore the rise of the huge fan communities and gala events that have sprung up around this unassuming experimental game.


Minecraft: Into the Nether will be released via iTunes and other view-on-demand services on January 27.



It Doesn’t Really Matter if ISIS Sympathizers Hacked Central Command’s Twitter



Screenshot: WIRED



For 40 minutes yesterday, followers of the most feared terrorist organization in the world had free rein over a computer network of the US military. That is the story that many will take away from the hack of CENTCOM’s Twitter and YouTube accounts. And that story will be hyperbole.


First of all, whether the group ad-Dawlah al-Islāmīyah fil ‘Irāq wa ash-Shām, aka “Islamic State” or ISIS, was actually behind the hack of US Central Command’s social media accounts remains to be seen. More importantly, seizing control of those accounts is the equivalent of controlling a social media megaphone, but not the actual networks that matter to military operations. These social networks are civilian controlled and hosted, not Pentagon owned or run. No critical command and control networks were touched, nor, for that matter, were any of the military’s internal or external computer networks that are used to move classified or even run-of-the-mill information.



P. W. Singer


Peter W. Singer is the founder of NeoLuddite, a technology advisory firm.




What was posted on the feeds was also not that significant or expert, which is all the more notable when one compares it against ISIS’s previous social media campaigns, which were highly professional with pre-planned hashtag tie-ins, catchy screengrabs, and distribution across language barriers. (ISIS runs social media feeds in at least 23 different languages.) Frankly, given the immense opportunity the hackers had holding such an impressive mic for around 40 minutes, the content itself was rather disappointing. Indeed, that one of the first tweets sent out was a picture of a goat points to this hack being more about shits and giggles than any highly organized campaign of attack linked to the broader ISIS network.


Another indicator was the account cover image being changed to the text “i love you isis.” ISIS is a group that tends not to call itself by that acronym (a Western construct that limits its scope to two state borders, “Islamic State of Iraq and Syria,” rather than the wider, borderless caliphate its vision aims for). Nor is ISIS a group that tends to use the verb “love” all that much.


Other messages were posed as if they were exposing secret US military war plans for China or Korea, but a closer look showed them actually to be PowerPoint slides with MIT’s Lincoln Lab logo on the side. Another message claimed to provide ISIS followers anywhere the ability to hunt down US soldiers at home; “AMERICAN SOLDIERS, WE ARE COMING, WATCH YOUR BACK,” one tweet said. But the documents attached turned out to be publicly available information dating back to 2012. For instance, knowing the snail-mail address of the Pentagon or the name of the commander of Fort Bragg is something an attacker could have gotten elsewhere and, of course, is not the same as being able to do something about it in real terms. On the YouTube side, only two supposed ISIS propaganda videos were posted, neither of which drew attention to the group in the manner of its more widely seen videos already online. To put it another way, in seizing the medium, they didn’t do much with the message.


At the end of the day, the result of the hack was lots of chatter, but little change in the real world. In other words, the social media hack was a lot like social media itself.


But don’t take that to mean that the event doesn’t matter. Just as social media plays a role in shaping real wars, politics, and business today, so does a hack like this signify more than bluster. This hack was highly embarrassing for US Central Command, all the more so for taking place at the very moment the President was speaking on the importance of cybersecurity. In addition, the likely low-level way the hackers got in is very embarrassing and likely consequential to whoever had the keys to the CENTCOM accounts (one imagines them now awaiting reassignment to the Arctic).


Incidentally, it’s a good reminder to turn on two-factor authentication for all your accounts. But I digress.


Embarrassing moments on social media happen all the time and may not have the same heft as attacks involving bombs and guns, but they do have importance. The meme of a big organization being shown up by the little guy is powerful, and resonates all the more when that organization is the US military, which has already spent $1.2 billion in the fight against ISIS. Yes, this hack was mere “cybervandalism,” as a CENTCOM spokesperson said after the attack (not via Twitter, notably), and it was likely unlinked to ISIS’s central core in its planning, organization or execution. But the hack is still a valuable propaganda moment for the group. At the same time that it has suffered a series of setbacks on the physical battlefields of Iraq, the ISIS flag got waved in a medium that more people in the West both notice and care about—the social media environment.


Moreover, the value also matters in a different kind of fight: ISIS’s fight with its competitors in the terrorism game for funding, recruits, and attention. ISIS is a rival to al Qaida and rose to prominence as much due to its savvy use of social media as by its actual operations on the ground. Akin to the Sydney siege or the Paris attacks, even if there turns out not to be an actual ISIS role in the planning and organization of the event, the group’s cause still will get credit for it. It further stakes ISIS’s claim to prominence among the next generation of jihadis.


Like most hacks, and terrorism writ large, the takeaway lessons here are the same as those that came before. We need to be better about security in all respects, but we usually won’t be until something goes wrong. Social media matters immensely, but also doesn’t. The threats are real, but they can be weathered.


And, letting goats into your office is never a good idea.



Some Idiot Will Probably Try to Trademark #JeSuisCharlie. It Won’t Work


Participants of "Je suis Charlie " ("I am Charlie") movement in Paris, France.

Participants of the “Je suis Charlie” (“I am Charlie”) movement in Paris, France. Bernd von Jutrczenka/AP



One of the side effects of social media has been a rise in various forms of online activism. We typically encounter it in the form of a phrase or slogan—often attached to a hashtag—used as a message to create awareness, share information and organize action around a particular cause or issue.


As I write this, the viral rallying cry of the moment is Je Suis Charlie or “I am Charlie” as a message of “condolence, outrage and defiance” and a “show of support for free expression” over the deadly attack on French satirical magazine Charlie Hebdo.



Roberto Ledesma


Roberto Ledesma is a New York based attorney focused on trademark and intellectual property law. Previously, he was an Examining Attorney at the U.S. Patent & Trademark Office. His website is EverythingTrademarks.com.




Sadly, and inevitably it seems—as evidenced by recent trending rallying cries such as BOSTON STRONG, OCCUPY WALL STREET, HANDS UP DON’T SHOOT and I CAN’T BREATHE—someone will file a trademark application for JE SUIS CHARLIE in a misguided attempt to claim exclusive rights to the phrase.


I hope I’m wrong.


It generally takes about a week for trademark filings at the U.S. Patent & Trademark Office (USPTO) to appear in its public database. It is still too soon to know if anyone has filed another futile application.


I write this post on the off chance that anyone considering filing a trademark application for JE SUIS CHARLIE—or any future trending rallying cries—finds it, reads it and reconsiders. Here’s why:


  • The USPTO will refuse your application.

  • You will not get your money back.

  • You may be publicly ridiculed.

So don’t even try. It’s as simple as that.


Let me explain.


Trademarks are source identifiers. They point to a single source for certain goods and/or services. Common and popular rallying cries fail to function as trademarks because the public does not identify them with a single source. Instead, the public views them as conveying an informational message about the cause or issue being addressed.


To illustrate, here are several recent USPTO trademark applications that were refused on the ground that the marks were merely informational slogans:



  • OCCUPY WALL STREET—refused as merely informational matter that refers to a nationally recognized political movement

  • BOSTON STRONG—use of the slogan has become so widespread and ubiquitous consumers will not perceive it as a trademark

  • HANDS UP DON’T SHOOT—“consumers are accustomed to seeing this slogan commonly used in everyday speech by many different sources, the public will not perceive the slogan as a trademark that identifies the source of applicant’s goods but rather only as conveying an informational message”


Trademark Manual of Examining Procedure (TMEP) § 1202.04 guides the Examining Attorneys’ refusals, which cite §§1, 2, and 45 of the Lanham Act (or §§1, 2, 3, and 45 for services) as the statutory basis for refusal.


Applicants must also realize that there is no right to confidentiality in the information disclosed in a USPTO trademark application, including your name and address. The press may come calling. You may get trolled on social media. You will upset masses of people who feel the phrase is part of something much larger than any single person or entity’s business ambitions. They will be repulsed by your actions. I wonder how the woman who filed the I CAN’T BREATHE trademark application feels about all the negative attention she has received.


Eventually, when it is reviewed, her I CAN’T BREATHE application will be rejected on the same ground the slogans listed above were refused (so will the later-filed #ICANTBREATHE application)—regardless of what the University of Chicago law professor says in this article:


University of Chicago law professor Jonathan Masur said if Crump was not the first person to produce “I Can’t Breathe” T-shirts and hoodies, she will not receive the trademark, even if she was the first to apply and regardless of whether anyone else files.



“If she’s not the first person to make these T-shirts, she’s going to be out of luck,” Masur said. “If someone successfully trademarks the phrase ‘I Can’t Breathe’ for clothing, it’s conceivable it could be worth a considerable amount of money. They could make a tidy sum.”



Inconceivable!


And wrong, on so many levels. The only thing that’s conceivable is that this kind of thinking got us here in the first place. It bothers me that this ridiculous quote, from a law professor at an esteemed university, was picked up by most major media outlets in reporting on the story. It will only lead to more dead-on-arrival copycat filings.


Understand that it does not matter if you were the first to market and sell goods under a commonly used rallying cry. For the reasons detailed above, your trademark application at the USPTO will NOT be approved. Plus, there is no “tidy sum” in developing a brand around monopolizing the name for a social movement (be it a dying man’s last words or a phrase now synonymous with free expression). Remember the ICE BUCKET CHALLENGE fiasco? It’s a distasteful approach to branding and there is no profit in it. You will lose more than you will gain—and I mean that beyond mere dollars.


But wait, there’s more!


Since the “Charlie” in “Je Suis Charlie” is a reference to French magazine Charlie Hebdo, a USPTO Examiner should also issue a false suggestion of connection refusal under Section 2(a) of the Lanham Act.


We have seen similar 2(a) refusals in the following applications:



  • JUSTICE 4 TRAYVON: Refused because mark consists of or includes matter which may falsely suggest a connection with the estate of the recently deceased Trayvon Martin

  • MH17: The mark is a close approximation of the recent tragedy involving Malaysia Airlines Flight 17. “The term at issue need not be the actual, legal name of the party falsely associated with applicant’s mark to be unregistrable under Section 2(a).”

  • LINSANITY: “The term LINSANITY has been so widely used in connection with Jeremy Lin that consumers would presume a connection to Mr. Lin if they see the term LINSANITY on the identified goods.” See TMEP Section 1203.03(c).


Notably, the USPTO allowed Trayvon Martin’s mother to register I AM TRAYVON (the registration was transferred to a foundation established by his parents). Similarly, LET’S ROLL was registered by the Todd M. Beamer Memorial Foundation in the wake of 9/11. That registration has since lapsed and is now cancelled. The USPTO has granted approval where there is a direct connection between the origin of, or reference in, the slogan and the applicant.


For everyone else though, there is no point in filing an application to register a trademark on the latest celeb/sports catchphrase du jour or the trending tragedy or common cause of the moment. Sure, you may get away with ornamental use on t-shirts, hats and other novelties, as many will do, and maybe a professional sports team will buy your loot and wear it on TV! All in the name of social awareness.


Maybe.


But you still won’t own any enforceable trademark rights. So just drop the notion you have of trying to capitalize on a popular rallying cry or trendy slogan by seeking federal trademark rights. It’s a bad idea.


Disclaimer: This is a blog post, which first appeared on my web site. It is NOT legal advice, but if you contact me for advice and retain me, I’ll tell you the same thing I’ve written above or just save myself the spiel and direct you to this post.



What Michael Mann Did to Get the Hackers in Blackhat Right


Joe Pugliese


Since Michael Mann’s 1980s days—directing James Caan in Thief, or producing/directing Miami Vice—he’s earned a reputation as the guy who makes sleek films that happen to be smart, or maybe just smart films that happen to be sleek. Either way, his love of cityscapes and of moral ambiguity have combined for a signature style that’s as compelling as it is technically proficient: gritty, atmospheric establishing shots paired with intricately crosscut sequences that turn subtext into art. For his latest, January’s Blackhat, the master of palette trains his lens (and pen) on international hackers—and our growing fears of cybercrime. WIRED sat down with the 71-year-old writer-director at his LA office to find out how he manages to never go out of style.


So what made you want to do a hacker film?


I became interested in hacking after Stuxnet. It’s as if there’s an invisible exoskeleton of data that we’re swimming around in, and everything is connected to everything else. The technocrats think every little can opener is going to change the way we live our lives, which is nonsense. This is the first piece of serious technology that changes the way we live our lives—it’s as democratizing as the printing press.


Hacking does seem more serious.


If you are a blackhat hacker, why do you do what you do? There’s very much a “Who says I can’t climb that mountain, who says I can’t find a vulnerability?” thing. There’s a positive-feedback loop. You could spend six hours gaming and it feels like 20 minutes went by, but the outcome is in a domain of fantasy. Blackhat hackers have exactly the opposite motivation. The outcome is impacting the real physical world.


There seems to be a bit of that in the recent celebrity hacking scandal—finding the forbidden.


That, to me, is some bizarre form of voyeurism. It’s boring. I’m much more interested in revelations about Russian hackers.



In Blackhat, Chris Hemsworth plays a hacker helping authorities catch a fellow blackhat. FRANK CONNOR/LEGENDARY PICTURES/UNIVERSAL PICTURES



Your movies typically spend a lot of time with bad guys. Are hackers your new gangsters?


I don’t know what gangsters are, but I do know what cybercrime is. Whether it’s coming from Estonia or Russia or the Ukraine or China or Taiwan or Mumbai, it’s about making a lot of money. It’s very, very sophisticated.


Yet, hacker culture hasn’t gotten the most accurate treatment in Hollywood.


I was a pupil in this stuff. We spent time with Mike Rogers, who was head of the House Intelligence Committee. In the movie, the things people are typing and the text you see are all the real thing.


Did you go back and look at any other movies about hacking, for the visual style?


Watching people type is boring. And I didn’t want to represent the inside of a chip as being a guy on a motorcycle on a bridge. I wanted to represent, as realistically as possible, the idea that a data packet is going in with an address that says, “I’m OK, let me through your firewall,” but hidden within it is a tool that can open up a back door. The sequence goes inside the computer and uses the actual shape of a transistor: one piece of conductive metal that has a surplus of electrons, and one with no electrons. The one license we took is we made them be two different colors.


You put a lot of detail into the music of your films as well. What was the process on Blackhat?


I worked with a number of composers on this one because I wanted different things from different people. It’s like casting actors. The film is an adventure of a narrative, and the story changes radically about three times, so there are very different conditions emotionally. That changes the music. So Atticus Ross does one thing, Harry Gregson-Williams does another thing, Ryan Amon does another thing, and then Mike Dean does something else. Mike Dean is Kanye West’s keyboardist and producer—a very talented guy.



Mann’s films often feel like a love letter to a single city. Joe Pugliese


You’ve worked frequently with actors who want meatier roles, like Will Smith in Ali. Do you see Chris Hemsworth that way? As more than Thor?


Ron Howard showed me about 45 minutes of Rush while he was editing it, and that’s when I decided this guy’s a real actor. I only work with people who are really serious. Why would you not want to dive into the deep end of the pool?


You were an early adopter of shooting digitally. But when Collateral came out, most theaters were still showing prints, and you thought the quality got lost.


It’s gotten better; I’m glad digital is around. But sometimes I may want a photochemical look. It’s conditioning; those visual artifacts make us feel a certain way. In Public Enemies, I wanted to bring you into the interior of a world and have it look like it really looked—as crisp and edgy as it would have been in real life. I didn’t want to look at it through the lens of nostalgia. But if I was doing Last of the Mohicans again, I would probably do that on film.


Somewhat similarly, there’s the way cities are moving from incandescent lighting to LED for streetlamps and infrastructure. LA’s already done it, and New York is under way. You’re known for the way you film urban landscapes—how would that affect your style?


LED might be a bit harsh, a bit blue, but the only thing I’m going to be nostalgic for is that glow you get when the marine layer comes in at night and all the sodium vapors bounce off the other side of the clouds.


“I WANTED THE THRILL OF THE CHARACTERS FEELING LIKE STRANGERS IN A STRANGE LAND. AND THAT BECAME JAKARTA.”


Your films often feel like love letters to a single city, whether it’s LA or Miami. But Blackhat involves some globe-hopping: Kuala Lumpur, Jakarta …


I wanted the thrill of the characters feeling very much like strangers in a strange land. And that became Jakarta, which has 10 million people at night and 20 million in the day. How different the urban landscape is, that’s what’s exciting. Kind of messy. I grew up in Chicago, and it reminds me of Chicago.


It seems as though directors from Chicago are particularly keen on shooting urban environments.


No, just directors who grew up in the city. The directors who grew up in the suburbs make comedies. That’s the rule.


But you seem to have a singular appreciation for a beautiful city.


There’s a romance to those black streets at night, wet and rainy, and the El tracks above them. Or the really cold winter days when there’s not a cloud in the sky—dry and clean. It’s great.


ANGELA WATERCUTTER (@waterslicer) interviewed Nick Frost, Edgar Wright, and Simon Pegg in issue 21.08.



How Intel Gave Stephen Hawking a Voice


Stephen Hawking

Marco Grob/WIRED UK



Stephen Hawking first met Gordon Moore, the cofounder of Intel, at a conference in 1997. Moore noticed that Hawking’s computer, which he used to communicate, had an AMD processor and asked him if he would prefer a “real computer” with an Intel microprocessor instead. Intel has been providing Hawking with customized PCs and technical support ever since, replacing his computer every two years.

Hawking lost his ability to speak in 1985, when, on a trip to CERN in Geneva, he caught pneumonia. In the hospital, he was put on a ventilator. His condition was critical. The doctors asked Hawking’s then-wife, Jane, whether they should turn off the life support. She vehemently refused. Hawking was flown to Addenbrooke’s Hospital, in Cambridge, where the doctors managed to contain the infection. To help him breathe, they also performed a tracheotomy, which involved cutting a hole in his neck and placing a tube into his windpipe. As a result, Hawking irreversibly lost the ability to speak.


For a while, Hawking communicated using a spelling card, patiently indicating letters and forming words with a lift of his eyebrows. Martin King, a physicist who had been working with Hawking on a new communication system, contacted a California-based company called Words Plus, whose computer program Equalizer allowed the user to select words and commands on a computer using a hand clicker. King spoke to the CEO of Words Plus, Walter Woltosz, and asked if the software could help a physics professor in England with ALS. Woltosz had created an earlier version of Equalizer to help his mother-in-law, who also suffered from ALS and had lost her ability to speak and write. “I asked if it was Stephen Hawking, but he couldn’t give me a name without permission,” says Woltosz. “He called me the next day and confirmed it. I said I would donate whatever was needed.”


Equalizer first ran on an Apple II computer linked to a speech synthesizer made by a company called Speech Plus. This system was then adapted by David Mason, the engineer husband of one of Hawking’s nurses, to a portable system that could be mounted on one of the arms of a wheelchair. With this new system, Hawking was able to communicate at a rate of 15 words per minute.


However, the nerve that allowed him to move his thumbs kept degrading. By 2008, Hawking’s hand was too weak to use the clicker. His graduate assistant at the time then devised a switching device called the “cheek switch.” Attached to his glasses, it could detect, via a low infrared beam, when Hawking tensed his cheek muscle. Since then, Hawking has achieved the feat of writing emails, browsing the internet, writing books and speaking using only one muscle. Nevertheless, his ability to communicate continued to decline. By 2011, he managed only about one or two words per minute, so he sent a letter to Moore, saying: “My speech input is very, very slow these days. Is there any way Intel could help?”


Moore asked Justin Rattner, then Intel’s CTO, to look into the problem. Rattner assembled a team of experts on human-computer interaction from Intel Labs, which he brought over to Cambridge for Hawking’s 70th birthday conference, “The State of the Universe,” on January 8, 2012. “I brought a group of specialists with me from Intel Labs,” Rattner told the audience. “We’re going to be looking carefully at applying some state-of-the-art computing technology to improve Stephen’s communicating speed. We hope that this team has a breakthrough and identifies a technique that allows him to communicate at levels he had a few years ago.”



Stephen Hawking in Chicago, 1986. AP



Hawking had been too ill to attend his own birthday party, so he met the Intel experts some weeks later at his office in the department of applied mathematics and theoretical physics at the University of Cambridge. The team of five included Horst Haussecker, the director of the Experience Technology Lab, Lama Nachman, the director of the Anticipatory Computing Lab and project head, and Pete Denman, an interaction designer. “Stephen has always been inspirational to me,” says Denman, who also uses a wheelchair. “After I broke my neck and became paralyzed, my mother gave me a copy of A Brief History of Time, which had just come out. She told me that people in wheelchairs can still do amazing things. Looking back, I realize how prophetic that was.”

After the Intel team introduced themselves, Haussecker took the lead, explaining why they were there and what their plans were. He had been speaking for 20 minutes when, suddenly, Hawking spoke.


“He welcomed us and expressed how happy he was that we were there,” says Denman. “Unbeknown to us, he had been typing all that time. It took him 20 minutes to write a salutation of about 30 words. It stopped us all in our tracks. It was poignant. We now realized that this was going to be a much bigger problem than we thought.”


At the time, Hawking’s computer interface was a program called EZ Keys, an upgrade from his previous software and also designed by Words Plus. It provided him with a keyboard on the screen and a basic word-prediction algorithm. A cursor automatically scanned across the keyboard by row or by column and he could select a character by moving his cheek to stop the cursor. EZ Keys also allowed Hawking to control the mouse in Windows and operate other applications on his computer. He surfed the web with Firefox and wrote his lectures using Notepad. He also had a webcam that he used with Skype.


The Intel team envisaged an overhaul of Hawking’s archaic system, which would involve introducing new hardware. “Justin was thinking that we could use technology such as facial-gesture recognition, gaze tracking and brain-computer interfaces,” says Nachman. “Initially we fed him a lot of these wild ideas and tried a lot of off-the-shelf technologies.” Those attempts, more often than not, failed. Gaze tracking couldn’t lock on to Hawking’s gaze, because of the drooping of his eyelids. Before the Intel project, Hawking had tested EEG caps that could read his brainwaves and potentially transmit commands to his computer. Somehow, they couldn’t get a strong enough brain signal. “We would flash letters on the screen and it would try to select the right letter just by registering the brain’s response,” says Jonathan Wood, Hawking’s graduate assistant. “It worked fine with me, then Stephen tried it and it didn’t work well. They weren’t able to get a strong enough signal-to-noise.”


“The more we observed him and listened to his concerns, the more it dawned on us that what he was really asking, in addition to improving how fast he could communicate, was for new features that would let him interact better with his computer,” says Nachman. After returning to Intel Labs and after months of research, Denman prepared a 10-minute video to send to Hawking, delineating which new user-interface prototypes they wanted to implement and soliciting his feedback. “We came up with changes we felt would not drastically change how he used his system, but would still have a large impact,” says Denman. The changes included additions such as a “back button,” which Hawking could use not only to delete characters but to navigate a step back in his user interface; a predictive-word algorithm; and next-word navigation, which would let him choose words one after another rather than typing them.


The main change, in Denman’s view, was a prototype that tackled the biggest problem that Hawking had with his user interface: missed key-hits. “Stephen would often hit the wrong key by hitting the letter adjacent to the one he wanted,” says Denman. “He would miss the letter, go back, miss the letter again, go back. It was unbearably slow and he would get frustrated.” That particular problem was compounded by Hawking’s perfectionism. “It’s really important for him to have his thoughts articulated in exactly the right way and for the punctuation to be absolutely right,” says Nachman. “He learned to be patient enough to still be able to be a perfectionist. He’s not somebody who just wants to get the gist of the message across. He’s somebody who really wants it to be perfect.”


To address the missed key-hits, the Intel team added a prototype that would interpret Hawking’s intentions, rather than his actual input, using an algorithm similar to that used in word processing and mobile phones. “This is a tough interaction to put your faith into,” the video explained. “When the iPhone first entered the market, people complained about predictive text but quickly distrust turned to delight. The problem is that it takes a little time to get used to and you have to release control to let the system do the work. The addition of this feature could increase your speed and let you concentrate on content.”


The video concluded: “What’s your level of excitement or apprehension?” In June that year, Hawking visited Intel Labs, where Denman and his team introduced him to the new system, initially called ASTER (for ASsistive Text EditoR). “Your current piece of software is a little dated,” Denman told him. “Well, it’s very dated, but you’re very used to using it, so we’ve changed the method by which your next-word prediction works and it can pretty much pick up the correct word every single time, even if you’re letters away from it.”


“This is a big improvement over the previous version,” Hawking replied. “I really like it.”


They implemented the new user interface on Hawking’s computer. Denman thought they were on the right path. By September, they began to get feedback: Hawking wasn’t adapting to the new system. It was too complicated. Prototypes such as the back button and the missed-key-hit correction proved confusing and had to be scrapped. “He’s one of the brightest guys in the world but we can’t forget that he hasn’t been exposed to modern technology,” says Denman. “He never had the opportunity to use an iPhone. We were trying to teach the world’s most famous and smartest 72-year-old grandfather to learn this new way of interacting with technology.”


Computer and speech synthesiser housing used by Stephen Hawking, 1999. Science Museum Photo Studio/Getty Images



Denman and the rest of the team realized that they had to start thinking differently about the problem. “We thought we were designing software in the traditional sense, where you throw out a huge net and try to catch as many fish as you can,” says Denman. “We didn’t realize how much the design would hinge on Stephen. We had to point a laser to study one individual.”

At the end of 2012, the Intel team set up a system that recorded how Hawking interacted with his computer. They recorded tens of hours of video that encompassed a range of different situations: Stephen typing, Stephen typing when tired, Stephen using the mouse, Stephen trying to get a window at just the right size. “I watched the footage over and over,” says Denman. “Sometimes, I would run it at four times the speed and still find something new.”


By September 2013, now with the assistance of Jonathan Wood, Hawking’s graduate assistant, they implemented another iteration of the user interface on Hawking’s computer. “I thought we had it, I thought we were done,” says Denman. However, by the following month, it became clear that, again, Hawking was having trouble adapting. “One of his assistants called it ‘ASTER’ torture,” recalls Denman. “When they said it, Stephen would grin.”


It was many more months before the Intel team came up with a version that pleased Hawking. For instance, Hawking now uses an adaptive word predictor from London startup SwiftKey, which allows him to select a word after typing a letter, whereas his previous system required him to navigate to the bottom of his user interface and select a word from a list. “His word-prediction system was very old,” says Nachman. “The new system is much faster and efficient, but we had to train Stephen to use it. In the beginning he was complaining about it, and only later I realized why: He already knew which words his previous systems would predict. He was used to predicting his own word predictor.” Intel worked with SwiftKey, incorporating many of Hawking’s documents into the system, so that, in some cases, he no longer needs to type a character before the predictor guesses the word based on context. “The phrase ‘the black hole’ doesn’t require any typing,” says Nachman. “Selecting ‘the’ automatically predicts ‘black’. Selecting ‘black’ automatically predicts ‘hole’.”
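
Stripped down, the “black hole” example is just context-conditioned next-word prediction. Here is a toy bigram model in Python trained on a stand-in corpus; SwiftKey’s actual language model is far more sophisticated, and the corpus, function name and counts below are assumptions made purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the Hawking documents fed to SwiftKey;
# a bigram count is a crude stand-in for its real language model.
corpus = (
    "the black hole emits radiation . "
    "the black hole has an event horizon . "
    "the early universe was hot and dense ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    following = bigrams.get(word)
    return following.most_common(1)[0][0] if following else None

# Selecting "the" suggests "black", and selecting "black" suggests "hole",
# so the phrase can be composed without typing a single character.
print(predict_next("the"))    # -> "black"
print(predict_next("black"))  # -> "hole"
```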


The new version of Hawking’s user interface (now called ACAT, after Assistive Contextually Aware Toolkit) includes contextual menus that provide Hawking with various shortcuts to speak, search or email; and a new lecture manager, which gives him control over the timing of his delivery during talks. It also has a mute button, a curious feature that allows Hawking to turn off his speech synthesizer. “Because he operates his switch with his cheek, if he’s eating or traveling, he creates random output,” says Wood. “But there are times when he does like to come up with random speech. He does it all the time and sometimes it’s totally inappropriate. I remember once he randomly typed ‘x x x x’, which, via his speech synthesizer, sounded like ‘sex sex sex sex’.”


Wood’s office is next to Hawking’s. It’s more of a workshop than a study. One wall is heaped with electronic hardware and experimental prototypes. Mounted on the desk is a camera, part of an ongoing project with Intel. “The idea is to have a camera pointed at Stephen’s face to pick up not just his cheek movements but other facial movements,” says Wood. “He could move his jaw sideways, up and down, and drive a mouse and even potentially drive his wheelchair. These are cool ideas but they won’t be coming to completion any time soon.”


Another experimental project, suggested by the manufacturers of Hawking’s wheelchair earlier this year, is a joystick that attaches to Hawking’s chin and allows him to navigate his wheelchair independently. “It’s something that Stephen is very keen on,” says Wood. “The issue was the contact between Stephen’s chin and the joystick. Because he doesn’t have neck movement it is difficult to engage and disengage the joystick.” Wood shows WIRED a video of a recent test trial of this system. In it, you can see Hawking driving his wheelchair across an empty room, in fits and starts. “As you can see, he managed to drive it,” says Wood. “Well, sort of.”


Wood shows WIRED a little grey box, which contains the only copy of Hawking’s speech synthesizer. It’s a CallText 5010, a model given to Hawking in 1988 when he visited the company that manufactured it, Speech Plus. The card inside the synthesizer contains a processor that turns text into speech; the same hardware was also used in automated telephone answering systems in the 1980s.


“I’m trying to make a software version of Stephen’s voice so that we don’t have to rely on these old hardware cards,” says Wood. To do that, he had to track down the original Speech Plus team. In 1990, Speech Plus was sold to Centigram Communications. Centigram was acquired by Lernout and Hauspie Speech Products, which was acquired by ScanSoft in 2001. ScanSoft was bought by Nuance Communications, a multinational with 35 offices and 1,200 employees. Wood contacted it. “They had software with Stephen’s voice from 1986,” says Wood. “It looks like we may have found it on a backup tape at Nuance.”


Hawking is very attached to his voice: in 1988, when Speech Plus gave him the new synthesizer, the voice was different, so he asked them to replace it with the original. His voice had been created in the early ’80s by MIT engineer Dennis Klatt, a pioneer of text-to-speech algorithms. He invented the DECtalk, one of the first devices to translate text into speech. He initially made three voices, from recordings of his wife, his daughter and himself. The female voice was called “Beautiful Betty”, the child’s “Kit the Kid”, and the male voice, based on his own, “Perfect Paul”. “Perfect Paul” is Hawking’s voice.


This story was first published in WIRED UK issue 01.15



A Startup Just Got $30 Million to Shake Up the Garbage Industry


A garbage collector empties a residential garbage bin into his truck in Seattle, Dec. 22, 2014. Elaine Thompson/AP



Millions of businesses are paying billions of dollars in rent on their garbage. They don’t think of it that way, of course, just as the fees they pay trash haulers to pick up their junk. But a significant portion of that money covers the cost of the landfill space itself. And what is a landfill if not a stinky, seething plot of real estate with garbage as the primary tenant?

It’s wasteful, sure. But what’s worse is the fact that this lucrative little arrangement gives the trash haulers who own those landfills very little incentive to recycle when those garbage heaps are practically minting money.


Nate Morris, CEO and co-founder of Rubicon Global, says his company is trying a different approach. It doesn’t own any landfills, or garbage trucks for that matter. Instead, its sole purpose is to help businesses cut their garbage costs and maximize the amount of waste diverted from landfills. This strategy has earned Rubicon large contracts across the country with the likes of 7-Eleven and Wegmans, but the company’s national footprint is set to double in the coming months. Today, Rubicon announced it has raised $30 million, which it will use to scale operations across the country and invest in new recycling technology research.


This vote of confidence from investors — including the likes of Salesforce founder and CEO Marc Benioff — is all the more noteworthy considering what Rubicon is up against. Our nation’s trash is controlled by two multibillion-dollar businesses, Waste Management and Republic Services. By any measure, Rubicon is tiny in comparison. But it’s gaining ground with businesses by appealing not only to their sense of environmental ethics, but to their bank accounts.


Less Waste, More Money


Founded by Morris and Lane Moore in 2008, Rubicon has created a virtual marketplace where thousands of small, local haulers can bid on portions of huge national contracts. This fosters competition between haulers, driving down the price of service. Rubicon also monitors the ebb and flow of each customer’s waste stream to cut down on unnecessary pickups. When Rubicon saves customers money, it takes a cut of those savings. Then, it catalogs the waste and scours its extensive database of recycling opportunities to find ways to resell the often valuable materials locked up in that waste. Again, the more Rubicon can sell off, the more it gets paid.
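
The incentive structure is easy to sketch. The figures, the 20 per cent share and the function names below are illustrative assumptions, not Rubicon’s actual contract terms, but they show how a reverse auction plus a savings-based fee ties the company’s revenue to cutting its customers’ bills.

```python
# Sketch of the incentive model: local haulers bid on a contract
# (a reverse auction), and Rubicon keeps a share of whatever it saves
# the customer. All figures and the 20% share are invented.

def award_contract(bids):
    """Pick the lowest monthly bid from the competing haulers."""
    return min(bids, key=lambda b: b["monthly_price"])

def rubicon_fee(current_cost, winning_price, share=0.20):
    """Rubicon's fee is a percentage of the customer's savings, so it
    only earns when the customer's garbage bill actually goes down."""
    savings = max(0.0, current_cost - winning_price)
    return savings * share

bids = [
    {"hauler": "Acme Hauling", "monthly_price": 4200.0},
    {"hauler": "Metro Waste", "monthly_price": 3900.0},
    {"hauler": "GreenCart", "monthly_price": 3650.0},
]

winner = award_contract(bids)
fee = rubicon_fee(current_cost=5000.0, winning_price=winner["monthly_price"])
print(winner["hauler"], fee)   # GreenCart 270.0
```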


“The disruption of our model really exists around our revenue structure,” Morris says. “We’re making sure all the incentives align for the first time in the waste industry.” In other words, unlike traditional haulers, Rubicon is most successful when it keeps trash out of landfills, not in them.


According to Eric Orts, a professor at the Wharton School and faculty director of the Initiative for Global Environmental Leadership, it’s this realignment that will be crucial to driving change. “You have an expanding population on the planet using more and more stuff. At some point you can’t follow the old model anymore,” he says. “It’s about how we redefine the whole process so you don’t want to do that anymore.”


Rubicon, which is a member of the Initiative’s corporate advisory board, does just that, Orts says. “It’s a no-brainer, if it costs less to reduce your waste. That’s a business case,” he says.


Garbage R&D


The success of Rubicon’s system depends on a software platform developed by the company. Nicknamed Caesar, the platform is the hub that makes sense of all the data on haulers, clients, and recycling possibilities. When new clients come online, Rubicon analyzes the waste stream—which involves a combination of dumpster diving and sensors—and catalogs it in Caesar. The system then automatically surfaces a list of ways that waste can be recycled in a given geographic region. The more waste in the system, the better Caesar gets at making those connections. Caesar is also where haulers bid on contracts and where clients can monitor their waste data. It’s the data, Morris says, that may be the most valuable resource of all.
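
At its core, that step is a matching problem: catalogued waste streams on one side, regional recycling outlets on the other. The Python sketch below is a hypothetical illustration of the lookup; the data model, the recycler names and the function are assumptions made for the example, not Rubicon’s actual schema.

```python
# Hypothetical illustration of the matching step: for each material in
# a client's catalogued waste stream, find recyclers operating in the
# client's region that will accept it. All names and data are invented.
RECYCLERS = [
    {"name": "PaperCo", "region": "Seattle", "accepts": {"cardboard", "paper"}},
    {"name": "MetalWorks", "region": "Seattle", "accepts": {"aluminum", "steel"}},
    {"name": "GlassLoop", "region": "Portland", "accepts": {"glass"}},
]

def recycling_options(waste_stream, region):
    """Return, per material, the local recyclers that will take it."""
    options = {}
    for material in waste_stream:
        options[material] = [
            r["name"]
            for r in RECYCLERS
            if r["region"] == region and material in r["accepts"]
        ]
    return options

client_waste = ["cardboard", "aluminum", "glass"]
print(recycling_options(client_waste, "Seattle"))
# {'cardboard': ['PaperCo'], 'aluminum': ['MetalWorks'], 'glass': []}
```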


“We believe the data we have today will ultimately help us communicate back to customers how to make better choices related to their supply chain,” he says.


Meanwhile, Morris says a large portion of the money the company just raised will fund its Rubicon X division, the research and development lab where it tests new recycling technology. The goal is not just to find the recycling opportunities that exist today, but to create the ones that don’t. Already, Rubicon X is testing things like dumpster-mounted cameras that enable the company to monitor the waste stream virtually and sensors that let it know when trash has been picked up.


“This is where we’re going to begin experimenting with new and alternative technologies that many other businesses have shied away from, because of their reliance on landfills,” Morris says. “We’ll experiment with how we can make waste obsolete.”


In many ways, Rubicon is the prototypical tech startup of its time. Like Uber and Airbnb before it, it has brought technology to a decidedly low-tech industry, with the full-throated belief that the efficiency technology offers has the power to topple even the most monolithic incumbents. But businesses are more intransigent than consumers, and Orts says it may take years, if not decades, to change the status quo in the industry. “You don’t get to change the world overnight,” he says. “But you do what you can, and companies will grow up over time if they’ve got good ideas that people will pay for.”