Federal Cybersecurity Director Found Guilty on Child Porn Charges

The U.S. Department of Health and Human Services building is shown August 16, 2006 in Washington, DC. Mark Wilson / Getty

As the acting cybersecurity chief of a federal agency, Timothy DeFoggi should have been well versed in the digital footprints users leave behind online when they visit web sites and download images.

But DeFoggi—convicted today in Maryland on three child porn charges including conspiracy to solicit and distribute child porn—must have believed his use of the Tor anonymizing network shielded him from federal investigators.

He’s the sixth suspect to make this mistake in Operation Torpedo, an FBI operation that targeted three Tor-based child porn sites and that used controversial methods to unmask anonymized users.

But DeFoggi’s conviction is perhaps more surprising than the others because he served at one time as the acting cybersecurity director of the U.S. Department of Health and Human Services. DeFoggi worked for the department from 2008 until January of this year. A department official told Business Insider that DeFoggi worked in the office of the assistant secretary for administration as lead IT specialist, but a government budget document for the department from this year (.pdf) identifies a Tim DeFoggi as head of OS IT security operations, reporting to the department’s chief information security officer.

The porn sites he’s accused of using—including one called PedoBook—were hosted on servers in Nebraska and run by Aaron McGrath, who has already been convicted for his role in the sites. The sites operated as Tor hidden services—sites that have special .onion URLs and that cannot normally be traced to the physical location where they are hosted.

Although anyone could use the sites, registered users like DeFoggi—who was known online under the user names “fuckchrist” and “PTasseater”—could set up profile pages with an avatar, often child porn images, and personal information and upload files. The site archived more than 100 videos and more than 17,000 child porn and child erotica images, many of them depicting infants and toddlers being sexually abused by adults.

The FBI seized the sites in late 2012, after McGrath failed to secure his administrative account with a password. Agents were able to log in and uncover the IP address of the Nebraska server where he was hosting two of them. McGrath worked at the server farm, and hosted the third site from his home. The FBI monitored him for a year and, after arresting him in November 2012, continued to operate his child porn sites secretly from a federal facility in Omaha for several weeks before shutting them down. During this time, they monitored the private communications of DeFoggi and others and engaged in “various investigative techniques…to defeat the anonymous browsing technology afforded by the Tor network” and identify the real IP addresses of users.

These techniques “successfully revealed the true IP addresses of approximately 25 domestic users who accessed the sites (a small handful of domestic suspects were identified through other means, and numerous foreign-based suspect IPs were also identified),” prosecutors wrote in a court document. In March 2013, twenty suspects were indicted in Nebraska, followed by two others the following August.

One of these techniques involved the use of drive-by downloads to infect the computers of anyone who visited McGrath’s web sites. The FBI has been using malicious downloads in this way since 2002, but has focused on targeting users of Tor-based sites only in the last two years.

Tor is free software that lets users surf the web anonymously. Using the Tor browser, the traffic of users is encrypted and bounced through a network of computers hosted by volunteers around the world before it arrives at its destination, thus masking the IP address from which the visitor originates.

The malware that investigators installed remotely on the machines of visitors to PedoBook and McGrath’s other sites was designed to identify the computer’s IP address as well as its MAC address and other identifiers. The results were coordinated raids in April 2013 that swept up more than a dozen suspects.

DeFoggi became part of that sting after registering as a member of PedoBook in March 2012; he remained active on the site until December of that year. During this time DeFoggi, who described himself as “having many perversions,” solicited child porn images from other members, viewed images and exchanged private messages with other members expressing interest in raping, beating and murdering infants and toddlers.

Among those with whom he corresponded was an FBI undercover employee. During chats DeFoggi described using Tor to access PedoBook early in the morning hours and between 4 and 6 pm. Among the evidence seized against him was pen register/trap trace data obtained from Verizon showing someone at his Maryland residence using Tor during these hours as well as the IP addresses used by an AOL account under the username “ptasseater,” which pointed to DeFoggi’s home.

When agents arrived at his home early one morning to execute a search warrant, they had to pry him from his laptop, which was in the process of downloading a child porn video from a Tor web site called OPVA, or Onion Pedo Video Archive. In addition to child porn images stored on his computer, authorities also found evidence of his Tor browser history, showing some of his activity at PedoBook and OPVA.

DeFoggi received many commendations during his government career, according to an exhibit list created by the government for his trial. The list includes several certificates of award from the U.S. Treasury, a certificate of appreciation from the State Department for his work on a Hurricane Katrina task force, several documents related to computer courses he attended and certifications he received.

DeFoggi is scheduled to be sentenced in November.

How to Shoot Hyperlapse Videos Without Making People Sick

Hyperlapse, Instagram’s new app, is designed to make your iPhone videos a lot more watchable. That’s how it’s supposed to work, anyway. The app isn’t even a day old, and the Internet is already full of super-zoomy, multi-axis clips that are basically concentrated, digital motion sickness.

Hyperlapse has a ton of potential for creativity, but let’s all master the basics before breaking the mold. Because in this case, the mold is actually a metaphor for not wanting to throw up, OK?

More than meets the UI

It seems so simple, but Hyperlapse is a deceptively complex app. It has both video-speed adjustments and an incredibly sophisticated digital stabilization system; those two features can help you capture two very different kinds of movies.

As its name suggests, Hyperlapse can capture videos with a sped-up time-lapse effect—up to 12 times normal speed. It’s handy for shooting your own Benny Hill chase sequences, capturing sunsets, and unlocking the mystery of what your cat does when you’re not in the room. But though Hyperlapse can eliminate small shakes, drastic changes of direction can look jarring at high playback speeds.

The second type of Hyperlapse video is less obvious: the tracking shot, in which you physically follow a moving subject with your camera. You’ve probably seen a lot of tracking shots, including the incredible ones in True Detective, The Shining, and Goodfellas (as well as its tribute scene in Swingers). For a tracking shot, you’ll normally want to keep the playback speed on the low end of the slider. Hyperlapse’s stabilization is the star here; dig that smooth, smooth flow.

Doin’ it, and doin’ it, and doin’ it well

It’s helpful to think of time-lapse shots and tracking shots as complete opposites—at least as far as Hyperlapse is concerned. For tracking shots, the camera will be in motion, but you’ll want to keep the playback speed low to eliminate discomforting motion effects. If you’re following a toddler’s run of terror through the backyard and house like a Family Circus comic come to life, you just want the smooth footage to tell the story. You barely need to speed it up, if at all. (It also helps to try to move as smoothly and slowly as possible—the stabilization can only do so much.)

For videos that you’re planning on playing back at high speeds, try keeping the camera completely still. (You don’t want your eyes to feel like they’re on a roller coaster, while your semicircular canals disagree.) If you must move the camera, stick to one direction over the course of the video—like shooting out the window of a moving car. For more inspiration, the sample movies in Hyperlapse’s demo video are good starting points.

Also, remember that you are speeding up these videos when you shoot them. Three minutes of source footage sped up at 12x will represent just 15 seconds after it’s been fully Hyperlapsified. With that in mind, shoot longer than you’re accustomed to with a smartphone. You can import and edit videos of up to 45 minutes in Hyperlapse, which you can then save to your Video roll. You won’t be able to post videos longer than 15 seconds on Instagram, but you can post them elsewhere.
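The arithmetic above is worth internalizing before you shoot. As a throwaway back-of-the-envelope helper (my own sketch, nothing from the app itself):

```python
def hyperlapse_output_seconds(source_seconds, speed_factor):
    """Length of the final clip: source duration divided by the speed-up factor."""
    return source_seconds / speed_factor

# Three minutes of footage at 12x collapses to 15 seconds.
print(hyperlapse_output_seconds(3 * 60, 12))   # → 15.0
# Even the 45-minute import ceiling at 12x yields only a 3:45 clip.
print(hyperlapse_output_seconds(45 * 60, 12))  # → 225.0
```

Work backward from the length you want: a full 15-second Instagram post at 12x means rolling the camera for three minutes.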

And even though Hyperlapse can smooth out slightly shaky video, it will crop the source footage a bit to do so. Because of the way Hyperlapse’s digital stabilization works, you’ll want to make sure important parts of your scene aren’t at the edges of the frame. The app keeps footage smooth by keeping a center portion of the frame steady, then cropping around the edges.

Breakin’ the law

Once you’ve mastered the art of creating smooth, sped-up videos that won’t make your viewers vomit, you may want to graduate to more complex projects. There are some excellent Hyperlapse videos out there that play around with different speeds in a single video, as well as some that manage to combine sped-up effects with camera movement effectively.

But until you know you’re an experimental Hyperlapse virtuoso, it’s a good idea to keep all those herky-jerky videos out of everyone’s Facebook feed. It’s been a rough day.

How Cops and Hackers Could Abuse California’s New Phone Kill-Switch Law


mattjeacock/Getty Images

Beginning next year, if you buy a cell phone in California that gets lost or stolen, you’ll have a built-in ability to remotely deactivate the phone under a new “kill switch” feature being mandated by California law—but the feature will make it easier for police and others to disable the phone as well, raising concerns among civil liberties groups about possible abuse.

The law, which takes effect next July, requires all phones sold in California to come pre-equipped with a software “kill switch” that allows owners to essentially render them useless if they’re lost or stolen. Although the law, SB 962, applies only to California, it undoubtedly will affect other states, which often follow the Golden State’s lead. It also seems unlikely phone manufacturers would exclude the feature from phones sold elsewhere. And although the legislation allows users to opt out of the feature after they buy the phone, few likely will do so.

The law raises concerns about how the switch might be used or abused, because it also provides law enforcement with the authority to use the feature to kill phones. And any feature accessible to consumers and law enforcement could be accessible to hackers, who might use it to randomly kill phones for kicks or revenge, or to perpetrators of crimes who might—depending on how the kill switch is implemented—be able to use it to prevent someone from calling for help.

“It’s great for the consumer, but it invites a lot of mischief,” says Hanni Fakhoury, staff attorney for the Electronic Frontier Foundation, which opposes the law. “You can imagine a domestic violence situation or a stalking context where someone kills [a victim's] phone and prevents them from calling the police or reporting abuse. It will not be a surprise when you see it being used this way.”

Apple, BlackBerry, Google, Samsung and other tech firms were initially opposed to the bill, but dropped their opposition after law enforcement groups lobbied for it and after the bill was amended to, among other things, delay the date the law would go into effect and exempt tablets from the mandate.

The CTIA, a trade group for the telecommunications industry, continues to oppose it, however, calling the law “unnecessary” because other solutions already exist to address the problem of stolen phones, including stolen-phone databases and anti-theft applications that cell-phone owners can use.

More importantly, however, the EFF continues to oppose it on grounds that it could be abused by law enforcement. The organization has focused in particular on the law’s failure to specify who can activate the kill switch and how it may be abused by others.

In a letter sent to California legislators earlier this year, the organization cited a controversial 2011 incident in which transportation officials in the San Francisco Bay Area shut off wireless cell phone service at BART stations during protests stemming from the shooting of a young man by transit police. The incident prompted an amendment to California’s public utilities code to limit the circumstances under which law enforcement can sever communication services—an amendment that also would govern the use of the kill switch.

Under that amendment, law enforcement, or a communications provider acting at the behest of law enforcement, can interrupt service only by court order for the purpose of protecting public safety or preventing the use of that service for an illegal purpose. “The order shall clearly describe the specific communications service to be interrupted with sufficient detail as to customer, cell sector, central office, or geographical area affected, shall be narrowly tailored to the specific circumstances under which the order is made, and shall not interfere with more communication than is necessary to achieve the purposes of the order,” according to section 7908 of the California Public Utilities code.

Regardless of these limitations, the EFF argues that the kill switch law provides law enforcement agencies with not only the legal means but also the technical means to do something that previously would have been considered too invasive. And since the public utilities code that limits how authorities in this state can use the kill switch does not apply elsewhere, the same protections don’t exist outside the state.

“[T]he fact remains that the presence of such a mechanism in every phone by default would not be available but for the existence of the kill switch bill,” EFF wrote in its letter. “Within two years, we would have legitimized a process that was seen to be quite extreme. While users have the ability to opt-out of such a tool, it is widely known that default settings are rarely changed.”

Science’s Big Data Problem



Modern science seems to have data coming out of its ears. From genome sequencing machines capable of reading a human’s chromosomal DNA (about 1.5 gigabytes of data) in half an hour to particle accelerators like the Large Hadron Collider at CERN (which generates close to 100 terabytes of data a day), researchers are awash with information. Yet in this age of big data, science has a big problem: it is not doing nearly enough to encourage and enable the sharing, analysis and interpretation of the vast swathes of data that researchers are collecting.

Science is the archetypal empirical endeavour. The theoretical physicist and all-round entertainer Richard Feynman put it best: “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with the experiment, it’s wrong.” This has been the founding principle of science since its earliest days. Without the painstaking astronomical observations of Tycho Brahe, a sixteenth-century Danish nobleman, Johannes Kepler would not have determined that the planets move in elliptical orbits and Isaac Newton would not have had the foundations on which to build his law of universal gravitation.

In the late nineteenth and early twentieth century, without the ingenious experiments of Albert Michelson and Edward Morley, in which they demonstrated the constancy of the speed of light and the absence of the putative ether (producing perhaps the most famous negative result of all time), Albert Einstein would have lacked a critical empirical basis for his special theory of relativity.

So praise to the data gatherers, sharers and analysers who are essential to the continued progress of science on which we all rely — and which too often we take for granted. But even as the rest of society, from business and economics to journalism and art, wakes up to the power of big data, the world of research is, ironically, not doing nearly enough to embrace the power of information. A big-data mindset involves more than having a lot of petabytes on your hard drive, and science is falling short in three main areas.

Information Is Power First, the power of information increases when it is shared. To see this you only have to look at the transformational effects of the Internet and the World Wide Web, and before that of other information technologies, from moveable type to the telegraph. Yet scientists are curiously reluctant to share their research findings, even with each other. True, it happens in some fields, such as genomics and astronomy, but in many others, including molecular biology and chemistry, secrecy is the norm.

A few years ago I attended a round-table meeting at a large US research organisation. The topic of discussion was data sharing among scientists, and how to encourage and enable it. Yet resistance, even among the experienced and enlightened people present, was palpable. Because of the intrinsically collaborative endeavour in which they’re engaged, academic researchers are often assumed to be more collaborative and less proprietary than their business counterparts. But the opposite is often true: as an employee of a commercial organisation, I wouldn’t dream of claiming ownership of information and withholding it from my colleagues in the way that scientists routinely do.

I would keep it from my competitors though, and there’s the rub. Scientific credit accrues to the authors of influential journal papers, not the providers of data (or experimental samples or software algorithms or any number of other kinds of contributions that people can make to the research process). With credit comes access to the scarce resources that researchers naturally seek, namely funding and employment. So until you’ve secured publication in a top journal everyone else is a competitor. If institutions and funders were to give more credit to open sharing of research data, scientific progress would accelerate and we would all benefit.

The Science of Data A second, related problem is that we still tend to see data generation and analysis as merely a prelude to the real job of science, which is to generate insights and theories. In a sense this is a reasonable view — data unaccompanied by an explanatory theory is often useless. But as the historical examples above illustrate, we need observations and analyses too — theories without data are mere speculation, not science.

Yet even as the rest of the world embraces the concept of the data scientist, science itself has yet to catch up. It is not at all unusual for a researcher to spend a highly successful career specialising in the study of a single object, however well known or obscure: the heart, the fruit fly or the goldfish Mauthner neuron. Yet it remains highly unusual to specialise in the functional role of data gathering, analysis and display. This is exactly what data scientists do, and we need more of them in research. Unfortunately the organisational structures of science — from university departments and funding bodies to academic societies and journals — are siloed into subject areas, inhibiting functional specialisation that cuts across these traditional disciplinary boundaries.

Academics, employers and funders in particular should actively encourage functional specialists, especially data scientists, and ensure that it becomes no more unusual for a researcher to become an expert in data science than to specialise in dark matter or fullerenes.

Improved Tools Finally, researchers need improved tools for managing, interpreting and sharing their data. Today most of them have better software at home for handling their music or photo collections than they have in the laboratory for doing the same with their research data. The reasons why are not hard to see: there are around 7 million researchers in the world, making them about 0.1% of the human population. From the point of view of a big software company they therefore represent a relatively insignificant niche. But economically and culturally they are disproportionately important and for all our sakes they deserve better.

This is an area in which commercial organisations (including Digital Science, the one I run) have important roles to play. Worldwide spending on research is in the order of a trillion dollars. The scientific publishing industry alone is worth tens of billions of dollars a year, and I believe the scientific software industry will overtake it. There are many scientists, software developers and entrepreneurs who are striving to build better tools in order to enable the next wave of scientific discovery on which we all depend. In whatever way we can provide it, they deserve our support.

Dr. Timo Hannay is Managing Director at Digital Science. He’s on Twitter at @digitalsci and @timohannay.

Vox Day, Scientist [Pharyngula]

The anti-vaxxers are excited. A recent paper, Measles-mumps-rubella vaccination timing and autism among young african american boys: a reanalysis of CDC data, claims that there is evidence that vaccinations cause autism. Only one problem: it’s a crappy paper.

Orac has covered it to an Oracian level of detail, so let me give the short summary:

  • The author, Brian Hooker, is unqualified. He is trained as a chemical engineer, although he now has a position as a biologist in a nursing program at a Christian college.

  • The journal, Translational Neurodegeneration, is a new something-or-other with no reputation, in the BioMed Central stable of journals. It’s not clear if it’s legit or not — to its credit, it’s not one of those journals that levies large page charges or fees to publish, so maybe it’s OK (but you never know…there sure are a lot of flaky fly-by-night journals popping up). It is not to its credit that it published this paper.

  • Notice the title: it’s a reanalysis of CDC data. That means that they sucked in a bunch of previously published data and rejiggered it, searching for possibly significant correlations. You don’t get to do that. We’re not talking about a meta-analysis, in which multiple data sets are pooled, but taking one data set and dividing it down into smaller, finer subsets, and then doing statistics on these fragments to test hypotheses not made by the original researchers. This is invalid, because when you subdivide data specifically looking for bits with low p values, you will always find them. It’s a probability game. Not to mention that it violates basic principles of experimental design.
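The “you will always find them” claim is ordinary probability, and a small Monte Carlo simulation makes it concrete (purely synthetic numbers, nothing to do with the CDC data): under a true null hypothesis each test’s p-value is uniform on [0, 1], so slicing one dataset into enough subgroups all but guarantees a spuriously “significant” result somewhere.

```python
import random

def familywise_false_positive_rate(n_subgroups, alpha=0.05, trials=20000, seed=1):
    """Monte Carlo estimate of the chance that at least one of n_subgroups
    independent null-hypothesis tests comes up 'significant' by luck alone.
    When there is no real effect, each p-value is uniform on [0, 1]."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < alpha for _ in range(n_subgroups))
        for _ in range(trials)
    )
    return hits / trials

# One pre-registered test: ~5% false positives.
print(familywise_false_positive_rate(1))
# Twenty after-the-fact subgroup tests: roughly 1 - 0.95**20 ≈ 64%.
print(familywise_false_positive_rate(20))
```

That is the whole trick: keep subdividing and a p below 0.05 becomes the expected outcome, not evidence.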

It’s appallingly bad. Even someone like me with only minimal statistical knowledge (and maybe a bit more knowledge about how to properly design an experiment) can see that it’s really an awful paper.

So it got published. Orac wrote a rebuttal that was probably longer than the original paper. Where’s the hilarity in all that?

Vox Day/Theodore Beale got in on the act. Not only does he cheer for conclusions for which he has no understanding at all about how they were reached, he accuses the CDC of fraud and conspiracy, and rejects the entirety of the evidence for the safety of vaccines. We’re in Alex Jones territory here.

Not only does this "reanalysis of CDC data" reopen the possible MMR-autism link, but it calls into question the integrity of the entire field of vaccine research. If Hooker is correct and CDC doctors such as Dr. Colleen Boyle have engaged in vaccine fraud, it will entirely explode the basic assumption that vaccines are safe because it will render all of the CDC’s data and assurances suspect.


Then Day/Beale went into a back-and-forth with Orac on Twitter. In the utterly daft exchange, Flaming Sword Boy accuses Orac of being a mere surgeon, who is scientifically illiterate. Right. Orac is a cancer researcher who publishes in the peer reviewed scientific literature, while Vox Day writes bad fantasy novels and despises women.

But then he drops a bombshell. Vox Day does too have scientific credentials!

No, says the guy whose scientific hypotheses have been turned into multiple published papers and cited by Nature.

Nature has also cited one of my original hypotheses. And it doesn’t erase your basic blunder re statistics.

Wait, what? I did a search; no, neither Vox Day nor Theodore Beale has published anything in Nature, or any other science journal, nor been cited anywhere in the scientific literature. Weird. How can he make this claim?

As it turns out, his claim is so tenuous and absurd that you have to laugh.

Here is his ‘hypothesis’: Religion doesn’t cause wars. He said this in his blog, and he also says it in his self-published ‘I hate atheists’ book, both of which hardly anyone reads, and which aren’t exactly popular with scientists.

However, he now claims that anyone anywhere who even says something vaguely like that (for instance, Scott Atran, who has argued that religion is not the primary causative agent in terrorism), is “citing” him, even if they don’t mention his name or his source, or explicitly acknowledge other sources. It’s all him. It is entirely his idea. It’s not as if people have been making excuses to exonerate religion from all blame for centuries, it was his idea.

This opens up new possibilities for me. My grandmother used to have a collection of my drawings of animals, made when I was four or five, before I learned to read. Therefore my hypothesis has to be entirely original and mine and mine alone. I would draw these animals, a crocodile, an elephant, a cow, mom and dad, and with a purple crayon, I would draw a convoluted squiggle in their heads, which I announced was their brains. Therefore, this is my hypothesis: animals have brains.

My CV is going to get really long as I add every paper ever published in the comparative neuroscience literature. Heck, I’m adding every paper ever published in neuroscience — they were all citing me, even if they didn’t know it. I am obviously the most influential man in the entire history of the science of the brain!

Maybe I should draw the line at every paper that mentions “animals”, though. That would be pretentious and narcissistic and slightly dishonest.

So, what’s your innovative hypothesis that qualifies you as a True Scientist, far more important than some guy with a scalpel and a set of grants and a long list of published papers in prestigious journals? Vox has shown the way. You can all be the greatest minds in science!

Netflix Petitions FCC to Block Comcast Time Warner Cable Merger


Photo: Josh Valcarcel/WIRED

Netflix has been one of the most vocal proponents of net neutrality, and now, the video streaming company is taking its fight straight to the top.

On Tuesday, the company filed a Petition to Deny document to the FCC, asking the government body to block the merger between Comcast and Time Warner Cable, two of the country’s largest internet service providers. It’s the first official appeal of its kind by Netflix, and yet, the lengthy 256-page petition echoes sentiments that Netflix CEO Reed Hastings has been extremely outspoken about since news of a potential merger first broke. It states that such a merger “would set up an ecosystem that calls into question what we to date have taken for granted: that a consumer who pays for connectivity to the Internet will be able to get the content she requests.”

Netflix’s fear, shared by much of the internet community, is that if the two companies were to merge they would have unprecedented power to charge companies like Netflix more for faster service. As Hastings recently told WIRED for a story on this very topic, Netflix has already signed deals with Comcast, Verizon, and AT&T, to ensure better service. And just this month, Netflix signed a similar deal with Time Warner Cable.

Hastings noted that it was only major ISPs charging these so-called interconnection fees, and not their smaller competitors. “Why would more profitable, larger companies charge for connections and capacity that smaller companies provide for free? Because they can,” Hastings wrote. “A combined company that controls over half of US residential Internet connections would have even greater incentive to wield this power.”

For large online video distributors like Netflix, such fees are troublesome, but ultimately, manageable. The greater concern, Hastings told WIRED, is that the proliferation of these fees will completely alienate smaller, leaner start-ups. “The next Netflix won’t stand a chance if the largest US Internet service providers are allowed to merge or demand extra fees from content companies trying to reach their subscribers,” Hastings wrote.

Netflix is far from alone in this fight. On Monday, DISH Network filed its own Petition to Deny to the FCC, citing many of the same concerns as Netflix. “As companies such as DISH innovate and invest to meet the growing consumer appetite for broadband-reliant video products and services, this chokehold over the broadband pipe would stifle future video competition and innovation,” the petition reads, “all to the detriment of consumers.”

Hyperlapse, Instagram’s New App, Is Like a $15,000 Video Setup in Your Hand

Today, Instagram is lifting the veil on Hyperlapse, the company’s first-ever app outside of Instagram itself. Using clever algorithmic processing, the app makes it easy to use your phone to create tracking shots and fast, time-lapse videos that look as if they’re shot by Scorsese or Michael Mann. What was once only possible with a Steadicam or a $15,000 tracking rig is now possible on your iPhone, for free. (An Android version is coming, pending some changes to the camera API on Android phones.) And that’s all thanks to some clever engineering and an elegantly pared-down interaction design. The product team shared their story with WIRED.

The Inspirations

By day, Thomas Dimson quietly works on Instagram’s data, trying to understand how people connect and spread content using the service. Like a lot of people working at the company, he’s also a photo and movie geek—and one of his longest-held affections has been for Baraka, an art-house ode to humanity that features epic tracking shots of peoples all across the world. “It was my senior year, and my friend who was an architect said, ‘You have to see it, it will blow you away,’” says Dimson. He wasn’t entirely convinced. The movie, after all, was famous for lacking any narration or plot. But watching the film in his basement, Dimson was awestruck. “Ever since, it’s always been the back of my mind,” he says.

A sample shot from Baraka

By 2013, Dimson was at Instagram. That put him back in touch with Alex Karpenko, a friend from Stanford who had sold his start-up to Instagram that year. Karpenko and his firm, Luma, had created the first-ever image-stabilization technology for smartphone videos. That was obviously useful to Instagram, and the company quickly deployed it to improve video capture within the app. But Dimson realized that it had far greater creative potential. Karpenko’s technology could be used to shoot videos akin to all those shots in Baraka. “It would have hurt me not to work on this,” says Dimson.

Clever Tech

The insight that powered Karpenko’s algorithms began, like so many other startup ideas, as a PhD thesis at Stanford. This was 2010, and the iPhone 4 had come out: one of the first phones that could capture HD video. That sounded terrific, in theory, but cramming such a great video camera onto a handheld device meant that the videos themselves were often shaky to the point of being unwatchable. “They were all just crappy,” Karpenko says.

He knew that image stabilization was the answer, but the technologies of that time, which you’d find in Final Cut and myriad other video editing programs, were simply unworkable for smartphones. Why? Imagine a video clip, taken from a moving car. To even out the juddering camera motion, image stabilization algorithms typically analyze a movie frame by frame, identifying image fragments common to each. By recording how those shared points jump around across frames, algorithms can then infer how the camera has been moving. By reverse engineering that motion data, software can recreate a new, steadier version of a film clip. Yet every step in that process requires processing muscle. That’s fine for a movie studio, which has massive computers that crank overnight to re-render a scene. It’s ridiculous for a smartphone.
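The frame-by-frame pipeline described above can be sketched in miniature. This is a hypothetical, translation-only illustration of the classic approach (real stabilizers estimate full perspective transforms from detected features); the function names and data are invented for the sketch:

```python
# Classic feature-based stabilization, simplified to pure translation:
# infer per-frame camera motion from shared points, smooth the path,
# and compute the correction shift to apply to each frame.

def estimate_path(tracked_points):
    """tracked_points[f] = positions (x, y) of the same features in
    frame f. Returns the cumulative camera translation per frame."""
    path = [(0.0, 0.0)]
    for prev, curr in zip(tracked_points, tracked_points[1:]):
        # The average displacement of shared points approximates
        # how the camera moved between the two frames.
        dx = sum(c[0] - p[0] for p, c in zip(prev, curr)) / len(prev)
        dy = sum(c[1] - p[1] for p, c in zip(prev, curr)) / len(prev)
        x, y = path[-1]
        path.append((x + dx, y + dy))
    return path

def smooth(path, radius=2):
    """Moving-average smoothing of the camera path."""
    out = []
    for i in range(len(path)):
        window = path[max(0, i - radius): i + radius + 1]
        out.append((sum(p[0] for p in window) / len(window),
                    sum(p[1] for p in window) / len(window)))
    return out

def corrections(path):
    """Shift for each frame: move it from the actual camera path
    onto the smoothed one."""
    return [(sx - x, sy - y)
            for (x, y), (sx, sy) in zip(path, smooth(path))]
```

Every step touches every shared point in every frame, and the final re-rendering of each shifted frame (not shown) is the real budget-buster — which is why this approach suited overnight render jobs, not phones.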

Left to right: product designer Chris Connolly, and software engineers Thomas Dimson and Alex Karpenko. Ariel Zambelich/WIRED

Inspired by a demo in which he saw gyroscopes attached to cameras to de-blur their images, Karpenko had an aha moment: Smartphones didn’t have nearly enough power to replicate video-editing software, but they did have built-in gyroscopes. On a smartphone, instead of using power-hungry algorithms to model the camera’s movement, he could measure it directly. And he could funnel those measurements through a simpler algorithm that could map one frame to the next, giving the illusion that the camera was being held steady. He mocked up a simple demo, and filmed a dot on his wall, while making his hand shake. “The images in the test matched up almost exactly, and that’s when I knew this was doable,” Karpenko says.
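Karpenko’s insight — measure the motion instead of inferring it — can be shown with a toy sketch. This is an assumption-laden illustration, not Luma’s actual algorithm: it treats orientation as a single angle and ignores lens distortion and rolling shutter:

```python
# Gyro-based stabilization sketch: integrate angular velocity to get
# each frame's orientation, then counter-rotate each frame toward a
# smoothed orientation. No image analysis required.

def integrate_gyro(angular_velocities, dt):
    """Cumulative camera angle (radians) per frame from gyro samples
    taken every dt seconds."""
    angles = [0.0]
    for w in angular_velocities:
        angles.append(angles[-1] + w * dt)
    return angles

def counter_rotations(angles, radius=1):
    """Rotation to apply to each frame so the sequence follows a
    smoothed (moving-average) orientation path."""
    out = []
    for i, a in enumerate(angles):
        window = angles[max(0, i - radius): i + radius + 1]
        target = sum(window) / len(window)
        out.append(target - a)  # rotate this frame by the difference
    return out
```

Integrating gyro samples costs a handful of additions per frame, versus the per-point, per-frame analysis of the feature-based approach — cheap enough to run live on a phone.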

Surfacing the Company’s Good Ideas

Dimson eventually cajoled Karpenko into ginning up a prototype app that wouldn’t just reduce the shakiness of your typical handheld videos, but was robust enough that you could run around with it and still have the footage look as if the camera were holding still. Usually, such prototypes are gawky, barely functioning things whose possibilities require a dollop of imagination. Dimson says this one was different. “We were blown away by how well it worked,” he says.

The only choices you make in the Hyperlapse UI are the speed of replay and whether to save your video Instagram

Eventually the duo uploaded video of the app in action to Instagram’s internal message board, where it received the ultimate blessing: a single comment from Instagram co-founder and CEO, Kevin Systrom. It simply declared, “This is cool.” This, in turn, egged them on to present their project to the wider group, at the company’s first “pitch-a-thon” for new creative tools, held last January. (Many of Instagram’s new features are the result of that meeting, including the sliders that allow you to dial in the strength of each filter.)

Once Dimson had the go-ahead to share a beta of Hyperlapse among Instagram employees, it caught fire—so much so that Dimson began to rue waking up every morning and having to dutifully ‘like’ the hundreds of videos that his coworkers were posting. “Honestly, we’re really surprised this thing didn’t leak out, given how obsessed people were with using it,” says Gabe Madway, a spokesman for the company.

If Hyperlapse is so cool, it makes you wonder why it’s built as a standalone app, rather than a new feature of Instagram. That had to do with the realities of building something really cool, but also fairly hard to explain. The honchos at Instagram figured some users would grok the possibilities immediately and become obsessed with it. But most would ignore it. To build it into Instagram, you’d have to hide it, to keep the core app simple for its millions of users. This would be a double bind for Hyperlapse: Power users would find it annoying to use, if they found it at all, and everyday users would simply never look for it. So they split it off into its own product. “We didn’t want to create a special use that would just be hidden,” says Mike Krieger, Instagram’s co-founder and CTO.

The Best UI Insight: The Only Filter is Speed

The elegantly bare UX belies the complexity of both what’s going on in the app, and what it replaces.

The first screen has only a record button. Once you’re done recording, the only thing you can do is choose what speed you want your video to run—the slider goes from 1x to 12x. Once that’s set, you can share the video directly to Facebook or Instagram. This simplicity came early to the product. Chris Connolly, the designer Dimson had rallied to his cause, recognized immediately that, of all the possible UI details, one function mattered above all: replay speed. Fooling with that speed made boring videos zippy, dull ones comic, and once merely neat-o ones poetic.

Once you start using the app, you quickly see that replay speed itself becomes a novel, alluring tool: For pets and people, replaying at about 1x gives you the sense that you’re creating a tracking shot like that Copacabana scene in Goodfellas. The higher replay speeds work better for shooting the sky out your airplane window, the scenery scrolling past during a train ride, or anything else that’s moving slowly or at a distance. Where Instagram’s filters are all about changing color and light, Hyperlapse uses a simple speed slider as its main creative decision.
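Mechanically, a speed slider like this amounts to resampling frames. A minimal sketch of the idea (Hyperlapse’s real resampler is presumably smarter about frame blending and timing):

```python
def speed_up(frames, factor):
    """Simulate an Nx replay by keeping every `factor`-th frame:
    at 12x, twelve seconds of capture become one second of output."""
    return frames[::factor]

# 120 frames shot at 30 fps = 4 s of capture...
frames = list(range(120))
fast = speed_up(frames, 12)  # ...becomes 10 frames, 1/3 s at 30 fps
```

At 1x nothing is dropped and you get the stabilized tracking shot; at 12x you get the time-lapse, with stabilization hiding the fact that the surviving frames were shot from a wobbling hand.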

All of those choices must be built in up front with traditional camera rigs. Usually, capturing even a brief tracking shot requires intricate choreography between where you’ll move with the camera and what your subjects will be doing when you film them. Time-lapse set-ups are even more intense, requiring a camera be set up on a track and programmed to move at a steady speed. Both of those art forms are hardly spontaneous, and spontaneity is supposed to be Instagram’s calling card.

Hyperlapse, by contrast, lets you create a tracking shot in less than a minute. “This is an app that lets you be in the moment in a different way,” says Krieger. “We did that by taking a pretty complicated image processing idea, and reducing it to a single slider. That’s super Instagram-y.”

This Special Pedal Measures Your Cycling Power on the (Relative) Cheap



Measuring your heart rate is great, but if you really want to get the most out of your cycling training, power—measured in watts—is the metric you want. Unfortunately, power meters ain’t cheap. Rear power hubs like PowerTap run around $800 (and that’s just for the hub, not the whole wheel), while crank and pedal options, like the Garmin Vector, can cost upwards of $1,500.

Garmin is trying to make things slightly more affordable with a new power meter variant, the Garmin Vector S. It’s basically half of the Garmin Vector, and is able to calculate your overall power output using just a single left pedal. The pedal estimates your total power based on pedal strokes (specifically, pedal deflection as a measure of force) as well as your cadence, and transmits that data to ANT+ devices like Garmin bike computers. Like a single crank system, it works by sampling half of your total power output, since it only tracks one leg, and that number is doubled to estimate your actual output. The included right pedal is just your usual dummy pedal with no smarts built in. The set comes in two sizes, regular (for 12-15 mm thick cranks) and large (for 15-18 mm thick cranks).
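The doubling described above is plain arithmetic: power is torque times angular velocity, measured on the left side only and multiplied by two. A minimal sketch, assuming torque has already been derived from the measured pedal deflection (the function names and structure are illustrative, not Garmin’s firmware):

```python
import math

def left_pedal_power(torque_nm, cadence_rpm):
    """Power (watts) at one pedal: torque x angular velocity."""
    omega = cadence_rpm * 2 * math.pi / 60  # rpm -> rad/s
    return torque_nm * omega

def vector_s_estimate(torque_nm, cadence_rpm):
    """Vector S-style total: double the left leg's power, on the
    assumption that the left/right stroke is symmetric."""
    return 2 * left_pedal_power(torque_nm, cadence_rpm)
```

At 20 N·m of left-pedal torque and 90 rpm this reports roughly 377 W; a rider with a pronounced left/right imbalance would see that bias doubled right along with the power, which is the trade-off you accept for the lower price.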

For first-time power-meter users, data from a single pedal can be an adequate way to gauge your overall efforts. But later this year, dual-pedal users—that is, owners of the full Garmin Vector set, not the Vector S—will get advanced metrics through a firmware upgrade that lets you compare power when you’re sitting versus standing, and track at which point in your pedal stroke you’re producing the most power.

The full Garmin Vector setup will run you $1,700, but the Vector S pedal is just $900, a decidedly more affordable price tag. A right-side pedal upgrade can be purchased separately for $700, should you buy the S and then save up dough for the dual-pedal system later on. The S will be available in Q4 this year.

One of History’s Most Beautiful Cars May Also Be the Most Innovative

Sixty years after its debut, the Mercedes-Benz 300 SL Gullwing remains one of the most beautiful cars ever made. Even when you paint it beige and cover its seats in shriek-inducing red and green plaid, it’s gorgeous. But more importantly—at least in the annals of automotive history—the car was packed with innovative tech like a slanted inline six-cylinder engine, fuel injection, a lightweight frame, and those glorious doors.

Like many automotive inventions, the 300 SL’s groundbreaking features were born from racing. It all started with the 1952 W 194 series 300 SL, which took first and second place at the 24 Hours of Le Mans; first, second, and third at the 24 Hours of the Nürburgring; and first in the 1,900-mile Carrera Panamericana race.

Two factors made the SL so successful. For power, the Germans took the engine used in the 300-series sedans and limousines and stuffed it under the SL’s long hood. They slanted the 3-liter inline six-cylinder 50 degrees to the left, which pushed the car’s center of gravity closer to the ground and maintained the low, sleek line of the hood. Chrysler’s famous 225 Slant Six, a similarly tilted setup, didn’t debut until 1960.

To make each bit of horsepower more effective, Mercedes engineer Rudolf Uhlenhaut developed a frame of thin tubing that weighed just 110 pounds. That tubular framework is the reason behind the car’s iconic feature, its gullwing doors. Mercedes couldn’t cut into the frame to fit conventional doors without sacrificing stability, so it hinged the doors to swing up instead of out.

Fortunately for people who weren’t paid race car drivers, Mercedes decided to bring a production version of the car to market. That move is credited to Max Hoffman, the official importer of Mercedes-Benz cars to the American market. He knew the car would be a hit here in the States and badgered the brass until they caved. And so the 300 SL Coupe was born, presented at the International Motor Sports Show in New York in February 1954, complete with the tubular space frame and gullwing doors.

The production car had something the 1952 racing version lacked: fuel injection. Mercedes used the system, developed by Bosch, in the 1953 W194/11 racing car prototype, and the coupe was the first production car to feature it. Switching from carburetors to fuel injection increased fuel efficiency and power. The engine cranked out 215 horsepower, good enough to go from 0 to 60 mph in eight seconds. With a top speed of 161 mph, the 300 SL was the fastest production car of its age, according to RM Auctions.

Between 1954 and 1957, Mercedes made 1,400 Gullwing coupes. The automaker says “the road-going racing coupé became the symbol of success for the rich and the beautiful of its day and age, a dream come true for a few other people and for many a dream they were at least able to see and hear every now and then.” This is one of the rare occasions where the marketing speak isn’t hype. Six decades after the first of those cars hit the road, those that are left routinely sell for six- and seven-figure sums. During this year’s Pebble Beach festivities, Rick Cole Auctions sold a 1956 300 SL for $1.6 million, and RM Auctions sent a 1955 model to a new owner for a whopping $2.53 million.

The original 300 SLs were a pain to maintain and a bit quirky on the handling side, but they sparked quite the line of offspring. The latest is the 2015 SLS AMG GT, a $220k gullwinged beast with a 6.3-liter V8 that rockets the car from 0 to 60 mph in 3.6 seconds. If you’ve got one, hold onto it—it could cross the auction block for a lot more money after another 60 years.