Fantasy Nobel Laureates, 2014 Edition

Could this object – an organic light emitting diode – win its inventors a Nobel prize? (Image: Wikimedia/meharris)

With the advent of autumn, a time-honored tradition is ushered into the public consciousness – a chance to draft a team of top-notch talent and see how you stack up against your friends. That’s right, it’s time to assemble your fantasy team.

Your Nobel Laureates fantasy team, that is. The coming season of scientific awards will be recognizing some of the most transformative work from the last several decades, and the team at Thomson Reuters has your cheat sheet. By combing through their Web of Science database, analysts are able to spotlight work and researchers that have been cited with high frequency by other studies over the years. “As imitation is one of the most sincere forms of flattery,” notes Basil Moftah, Thomson Reuters’ president of IP and Science, “so too are scientific literature citations one of the greatest dividends of a researcher’s intellectual investment.”

It’s scientific populism, the suggestion that citations are proportional to importance, but the method appears to be relatively robust – after all, the Thomson Reuters crew has gotten it right 35 times since 2002. This year, the data pointed to 22 researchers – all men – in the fields of physiology / medicine, physics, and chemistry. And here they are, coming to a fantasy draft board near you:

Physiology or Medicine

James Darnell, Jr (Rockefeller University); Robert G. Roeder (Rockefeller University); Robert Tjian (University of California, Berkeley)

For their work on eukaryotic transcription and gene regulation. The pathway from genetic code to physiological reality is a mysterious road with many potential digressions. In eukaryotic cells, the process is even more complicated than in single-celled prokaryotes, with an array of regulating molecules and feedback loops.

David Julius (University of California San Francisco)

For his studies of the molecular basis for pain. In a quest to determine how molecules interact with nerve endings, Julius and his group have experimented extensively with hot and cold sensations, using capsaicin (the “spicy” ingredient in peppers) and menthol (the cooling component of mint), respectively.

Charles Lee (Jackson Laboratory for Genomic Medicine); Stephen Scherer (University of Toronto); Michael Wigler (Cold Spring Harbor Laboratory)

For their discoveries linking gene copy number variation with certain diseases. Genetics dogma suggests that you inherit one copy of every autosomal gene from each parent, but these researchers pieced together a befuddling puzzle to conclude that this is not always the case. In fact, wide variations in the numbers of gene copies exist at hundreds of sites throughout the human genome, leading to a cascade of effects that may be associated with diseases including breast cancer and spinal muscular atrophy.


Physics

Charles Kane (University of Pennsylvania); Laurens Molenkamp (University of Würzburg); Shoucheng Zhang (Stanford University)

For research on the quantum spin Hall effect and topological insulators. The quantum spin Hall effect is a state of matter in which an electron’s spin orientation is coupled to its direction of motion, allowing current to flow along a material’s edges without dissipation. Kane, Molenkamp, and Zhang established much of the theoretical framework for the effect, while also ushering in experimental demonstrations of the phenomenon in semiconductor systems.

James Scott (University of Cambridge); Ramamoorthy Ramesh (University of California Berkeley); Yoshinori Tokura (University of Tokyo)

For their contributions to ferroelectric memory devices and multiferroic materials. Flash memory plays a key role in many of our technological devices, but ferroelectric-based technologies may ultimately prove preferable for certain applications. Using a ferroelectric layer rather than a conventional dielectric one, these devices require less power, process information faster, and can withstand many more cycles of writing and erasing data.

Peidong Yang (Lawrence Berkeley National Laboratory)

For his work on nanowire photonics. Manipulating optical energy is a critical capability for computers and communications tools; doing so with devices smaller than the wavelength of the light you’re trying to alter is a promising but extremely challenging extension. Yang and his team have made progress with minuscule components called “nanoribbons” that can guide light despite the unwieldy scale differential.


Chemistry

Charles Kresge (Saudi Aramco); Ryong Ryoo (Korea Advanced Institute of Science and Technology); Galen Stucky (University of California Santa Barbara)

For the design of functional mesoporous materials. Mesoporous materials have pores between 2 and 50 nanometers wide. These pore dimensions are proving extremely useful in the chemical and alternative-energy industries for directing reactions that require even dispersion and particular surface-area-to-volume ratios.

Graeme Moad (Commonwealth Scientific and Industrial Research Organization, CSIRO); Ezio Rizzardo (CSIRO); San Thang (CSIRO)

For their development of the reversible addition-fragmentation chain transfer (RAFT) polymerization process. RAFT polymerization controls the otherwise rapid and chaotic process of free radical reactions, using a certain class of intermediary molecule (thiocarbonylthio compounds, if you must know) in a reversible process. This approach is able to accommodate a wide range of precursor molecules – styrenes, acrylamides, acrylates – and can generate several different macro-scale architectures, making it one of the most versatile and valuable techniques for industrial polymerization.

Ching Tang (University of Rochester / Hong Kong University of Science and Technology); Steven Van Slyke (Kateeva)

For inventing the organic light emitting diode (OLED). OLEDs consist of a layer of light-emitting organic compound sandwiched between two electrodes, one of which is typically transparent. This combination of optical and electrical properties enables many of today’s digital displays, such as computer screens and mobile phones.

Regulatory thermometer that controls cholera discovered by microbiologists

Karl Klose, professor of biology and a researcher in UTSA's South Texas Center for Emerging Infectious Diseases, has teamed up with researchers at Ruhr University in Bochum, Germany, to understand how humans get infected with cholera. Their findings were released this week in an article published by the Proceedings of the National Academy of Sciences.

Cholera is an acute infection caused by ingestion of food or water that is contaminated with the bacterium Vibrio cholerae. An estimated three to five million cases are reported annually and 100,000-120,000 people die from cholera infections every year. Cholera patients suffer from dramatic fluid loss and can lose up to 40 liters of fluid from their body in just a few days.

Klose and his collaborators discovered that Vibrio cholerae, which normally lives in oceans and rivers, senses a shift in temperature as it enters the human body through a mechanism called a ribonucleic acid (RNA) thermometer. The thermometer detects the higher body temperature of 98.6 degrees Fahrenheit, and then turns on the virulence factors that lead to cholera. Klose's laboratory showed that interfering with the thermometer prevents the bacteria from causing disease, suggesting possible therapeutic outcomes from this research.

The research collaboration stemmed from a long friendship between Klose and Franz Narberhaus, who trained as microbiologists together at U.C. Berkeley 25 years ago.

"The temperature shift is one of the signals that the bacterium uses to turn on the virulence factors, such as the cholera toxin, that cause the disease," said Klose. "We have shown that the bacteria's thermometer controls temperature-dependent expression of the virulence factors. They only express them when they are at body temperature and not at ocean temperature."

"We found that if the RNA thermometer is prevented from working correctly and detecting the right temperature, the bacteria won't cause any disease at all. The organisms will just pass right through the body," explained Klose. "If you can figure out how to disrupt the RNA thermometer in some manner, then you may have a therapy against this disease."

Klose says the long-term goal is to come up with intervention strategies against cholera. Understanding exactly how the bacterium controls the expression of the virulence factors is one step forward in trying to intervene during cholera epidemics.

Story Source:

The above story is based on materials provided by the University of Texas at San Antonio. Note: Materials may be edited for content and length.

Drones Get OK from Feds to Shoot Movies in US Airspace

A Parrot drone. Image: Parrot

The federal government has taken a step toward opening US skies to commercial use of drones, allowing several movie production companies to use flying bots on their shoots.

On Thursday, the Federal Aviation Administration announced that six companies have been granted exemptions from current regulations that, the agency says, ban most commercial use of unmanned aerial vehicles.

In a conference call with reporters, FAA Administrator Michael Huerta called the exemption process a roadmap for other applicants seeking to use drones for other purposes, from crop inspections to pipeline patrols. “This process opens up a whole new avenue,” Huerta said. The approved exemptions represent the first of 40 petitions under consideration by the FAA.

The exemptions allow the companies to use the drones for any kind of video production, but only under strict conditions. Drone operators must have pilots’ licenses, and the drones cannot leave the pilots’ line-of-sight while airborne. The drones cannot go higher than 400 feet, and they can only fly over closed sets. What’s more, filmmakers must still apply to the FAA for approval to operate in specific areas.

Safety Struggle

The restrictions reflect the long struggle to reconcile the many potential uses of inexpensive unmanned aircraft with concerns over safety and privacy. Determining how to integrate drones with the national air traffic control system is also proving complicated. The FAA is under congressional mandate to craft rules for the civilian use of drones in US airspace by September 2015, though an audit predicted the agency is likely to miss that deadline.

In the short term, these difficulties mean that one of the most powerful features of drones—the ability to pilot themselves over long distances without human observation or control—won’t be available to commercial users. “We’re a long way from having autonomy in civil airspace,” said attorney Greg Cerillo, a partner at Wiley Rein, who heads the Washington, D.C., law firm’s aviation practice.

Safety concerns in particular also mean drones are less likely to be approved anytime soon for uses that would require them to fly over populated areas, such as delivery or news-gathering. “For the most part, you’re dealing with image capture, heat sensing, chemical sensing. You’re not dealing with moving objects from point A to point B yet,” Cerillo said.

More Drones, More Jobs

Still, some in the drone industry are hailing the FAA’s decision as a move in the right direction. “I am happy for the news and expect to see many more of these granted,” said Jesse Kallman, head of regulatory affairs at Airware, a San Francisco startup that makes software for commercial drones.

The exemptions will also mean more domestic business for the motion picture industry, said former U.S. Sen. Chris Dodd, now chairman and CEO of the Motion Picture Association of America, which worked with the FAA to craft the new rules.

Productions ranging from Skyfall to The Smurfs had to go outside the U.S. to shoot in countries where drone regulations were more lax. “The decision today allows more of that production now to occur in the United States,” he said, adding that drones would open up creative doors for filmmakers. “This is great news for all of us.”

Eyeless Mexican cavefish lost metabolic circadian rhythm [Life Lines]

Image of eyeless Mexican tetra fish by H-J Chen.

The metabolism of most animals follows a circadian rhythm, differing between day and night. Mexican cavefish, which live in constant darkness, lost this rhythm long ago. In a newly published study in PLOS ONE, researchers compared the metabolic rates of cave- and surface-dwelling Mexican tetra fish (Astyanax mexicanus). They hypothesized that because fish in each habitat naturally experience different food availability, predation pressure, and exposure to daily light fluctuations, they might also have different metabolic needs. Surface-dwelling fish kept in constant darkness retained a metabolic circadian rhythm, with oxygen demands consistent with daytime activity, and expended 38% more energy than the cave-dwelling fish. The eyeless cave-dwelling fish, by contrast, show no metabolic circadian rhythm, which the researchers say saves them 27% of their energy compared to surface fish. In a quote published in Discovery News, lead study author Dr. Damian Moran said, “These cave fish are living in an environment without light, without the circadian presence of food or predators, they’ve got nothing to get ready for, so it looks like they’ve just chopped away this increase in anticipation for the day.” The study authors suggest that conserving energy is important for life in a cave, where food may be scarce.


Moran D, Softley R, Warrant EJ. Eyeless Mexican Cavefish Save Energy by Eliminating the Circadian Rhythm in Metabolism. PLOS ONE. September 24, 2014. DOI: 10.1371/journal.pone.0107877

Discovery News

Hackers Are Already Using the Shellshock Bug to Create DDoS Botnets


Paul M. Gerhardt

With a bug as dangerous as the “shellshock” security vulnerability discovered yesterday, it takes less than 24 hours to go from proof-of-concept to pandemic.

As of Thursday, multiple attacks were already taking advantage of that vulnerability, a long-standing but only recently discovered bug in the Linux and Mac tool Bash that makes it possible for hackers to trick Web servers into running any commands that follow a carefully crafted series of characters in an HTTP request. The shellshock attacks are being used to infect thousands of machines with malware designed to make them part of a botnet of computers that obey hackers’ commands. And in at least one case the hijacked machines are already launching distributed denial of service attacks that flood victims with junk traffic, according to security researchers.
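The mechanism can be probed harmlessly on your own machine. A minimal sketch (assuming bash is installed at /bin/bash): the widely circulated test passes a bash function definition through an environment variable with a command appended after the function body. A vulnerable bash executes that trailing command; a patched one does not.

```python
import subprocess

# Probe for Shellshock (CVE-2014-6271) against the local bash.
# In the 2014 web attacks, the same payload arrived in an HTTP header
# (e.g. User-Agent) that a CGI server exported into bash's environment.
result = subprocess.run(
    ["/bin/bash", "-c", "echo this is a test"],
    env={"x": "() { :;}; echo vulnerable"},  # function definition + payload
    capture_output=True,
    text=True,
)
print(result.stdout)
# A patched bash prints only "this is a test"; a vulnerable one also
# prints "vulnerable" because it executed the appended command.
if "vulnerable" in result.stdout:
    print("bash is vulnerable")
else:
    print("bash appears patched")
```

Run against any bash patched after September 2014, the function definition is ignored and only the intended command executes.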

The attack is simple enough that it allows even unskilled hackers to easily piece together existing code to take control of target machines, says Chris Wysopal, chief technology officer for the web security firm Veracode. “People are pulling out their old bot kit command and control software, and they can plug it right in with this new vulnerability,” he says. “There’s not a lot of development time here. People were compromising machines within an hour of yesterday’s announcement.”

Wysopal points to attackers who are using a shellshock exploit to install a simple Perl program found on the open source code site GitHub. With that program in place, a command and control server can send orders to the infected target using the instant messaging protocol IRC, telling it to scan other networked computers or flood them with attack traffic. “You install it on the server that you’re able to get remote command execution on and now you can control that machine,” says Wysopal.

The hackers behind another widespread exploit using the Bash bug didn’t even bother to write their own attack program. Instead, they rewrote a proof-of-concept script created by security researcher Robert David Graham Wednesday that was designed to measure the extent of the problem. Instead of merely causing infected machines to send back a “ping” as in Graham’s script, however, the hackers’ rewrite instead installed malware that gave them a backdoor into victim machines. The exploit code politely includes a comment that reads “Thanks-Rob.”

The “Thanks-Rob” attack is more than a demonstration. The compromised machines are lobbing distributed denial of service attacks at three targets so far, according to researchers at Kaspersky Labs, though they haven’t yet identified those targets. The researchers at the Russian antivirus firm say they used a “honeypot” machine to examine the malware, locate its command and control server and intercept the DDoS commands it’s sending, but haven’t determined how many computers have already been infected.

Based on his own scanning before his tool’s code was repurposed by hackers, Graham estimates that thousands of machines have been caught up in the botnet. But millions may be vulnerable, he says. And the malware being installed on the target machines allows itself to be updated from a command and control server, so that it could be changed to scan for and infect other vulnerable machines, spreading far faster. Many in the security community fear that sort of “worm” is the inevitable result of the shellshock bug. “This is not simply a DDoS trojan,” says Kaspersky researcher Roel Schouwenberg. “It’s a backdoor, and you can definitely turn it into a worm.”

The only thing preventing hackers from creating that worm, says Schouwenberg, may be their desire to keep their attacks below the radar—too large of a botnet might attract unwanted attention from the security community and law enforcement. “Attackers don’t always want to make these things into worms, because the spread becomes uncontrollable,” says Schouwenberg. “It generally makes more sense to ration this thing out rather than use it to melt the internet.”

The Bash bug, first discovered by security researcher Stéphane Chazelas and revealed Wednesday in an alert from the US Computer Emergency Readiness Team (CERT), still doesn’t have a fully working patch. On Thursday Linux software maker Red Hat warned that a patch initially released along with CERT’s alert can be circumvented.

But Kaspersky’s Schouwenberg recommended that server administrators implement the existing patch anyway; while it’s not a complete cure for the shellshock problem, he says it does block the exploits he’s seen so far.

In the meantime, the security community is still bracing for the shellshock exploit to evolve into a fully self-replicating worm that would increase the volume of its infections exponentially. Veracode’s Chris Wysopal says it’s only a matter of time. “There’s no reason someone couldn’t modify this to scan for more bash bug servers and install itself,” Wysopal says. “That’s definitely going to happen.”

Amazon Takes A Big Step Toward Competing Directly with UPS

Packages slide down chutes, separated by delivery method.

Packages slide down chutes, separated by delivery method, at an Amazon fulfillment center in Phoenix. Ariel Zambelich/WIRED

This past Christmas, when delivery delays left many Santas empty-handed, Amazon’s biggest vulnerability was exposed: it doesn’t control the last, most important step.

Amazon is brilliant at getting orders out its own doors, but with rare exceptions, it relies on third parties—mainly UPS and FedEx—to get those orders onto the doorsteps of customers. Over the past year, however, Amazon has been quietly building out its own capacity to do more of what the big carriers do—a logistics upgrade that takes the company one step closer to taking over the entire delivery process for itself.

In a blog post on Thursday, ChannelAdvisor CEO Scot Wingo describes Amazon’s new “sortation centers” that funnel packages into zones based on zip code. In the past, Amazon simply loaded all orders onto various trucks operated by various carriers and left the more specific geographic sorting to them. By doing that sorting itself, Amazon is removing a step in the logistics chain, clearly in hope of streamlining the delivery process to avoid another holiday snafu this year.
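The sortation step is, at its core, a bucketing problem. A toy illustration (not Amazon's actual system; the zone names and ZIP prefixes below are invented): packages are grouped by ZIP-code prefix so that each outbound truck receives only packages bound for its zone, with anything unmatched falling back to a third-party carrier.

```python
from collections import defaultdict

# Hypothetical zone table: ZIP-code prefix -> delivery zone.
ZONE_BY_PREFIX = {
    "850": "phoenix-metro",
    "852": "phoenix-east",
    "853": "phoenix-west",
}

def sort_packages(packages):
    """Bucket (package_id, zip_code) pairs into delivery zones.

    Packages whose ZIP prefix is not in the zone table are handed off
    to an outside carrier, mirroring the pre-sortation status quo.
    """
    zones = defaultdict(list)
    for pkg_id, zip_code in packages:
        zone = ZONE_BY_PREFIX.get(zip_code[:3], "carrier-default")
        zones[zone].append(pkg_id)
    return dict(zones)

print(sort_packages([("A1", "85004"), ("B2", "85201"), ("C3", "10001")]))
```

The payoff of doing this in-house is that the geographic sort, previously performed by UPS or FedEx, happens before packages ever leave Amazon's building.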

“With Amazon’s growth into the sortation process in the US, they clearly are trying to further own the fulfillment process,” Wingo says. If the company adds its own fleet of trucks, it could wind up competing directly with UPS, FedEx, and the U.S. Postal Service.

Amazon’s North American fulfillment center network. ChannelAdvisor

Wingo’s company makes software for the millions of third-party merchants who sell their goods on Amazon. Many of them also store their merchandise at Amazon warehouses and rely on Amazon to ship orders to customers. This dependence makes them avid consumers of intelligence on Amazon’s fulfillment operations, which tend to run below the radar despite the million-square-foot warehouses that anchor the network. While delivery drones capture all the attention, Amazon’s on-the-ground business keeps expanding at an astonishing scale.

By Wingo’s count, Amazon is on its way to 90 fulfillment centers across the US, and 155 total globally. This gives Amazon by far the largest direct-to-consumer fulfillment operation in retail, even compared to Walmart, whose logistics network is designed mainly to ship items to stores. As Amazon has built out this network, it has also embarked on a slow rollout of its own grocery delivery service, using its own branded trucks, that delivers not just food but a wide selection of consumer merchandise the same day. The more fulfillment and sortation centers Amazon builds—Wingo points to an offhand remark on a recent Amazon earnings call that the company had 15 in the works—the more capacity Amazon has to support those trucks making deliveries for Amazon by Amazon.

At a company where customer worship is the corporate religion, it must be monumentally frustrating not to control the only part of the customer experience that involves actual physical contact. However important Amazon is to UPS or FedEx as a customer, it’s still not the only customer. “The challenge is that these carriers are not dedicated exclusively to Amazon,” says Marc Wulfraat, president of MWPVL International, a logistics consulting firm. “These carriers have to service all of their customers at peak season and therein lies the problem … the pain needs to be spread around.”

In the process, some Amazon customers lose out—and at a time of year when late deliveries mean not just inconvenience but teary children under Christmas trees. In Amazon’s warehouses, millions of packages zip around on mazes of conveyor belts, inspiring comparisons to Santa’s workshop. If Amazon wants to win this Christmas, it might also need its own sleighs.

In Forza Horizon 2, Computers Finally Drive as Crazy as Humans


One of the best things about videogames these days is that you can play against your friends, even if they’re not on the same continent as you. With the Forza racing series, Microsoft’s Turn 10 Studios has taken that a step further: Gamers can race against their friends, even when their friends are offline.

Forza Horizon 2, available for Xbox One on September 30, is different from last year’s Forza 5, which aims to offer an exact reproduction of real-world racing. Instead of being limited to tracks, Horizon 2 drivers have access to a huge chunk of southern France and Italy, from Nice to Castelletto (complete with stunning visuals and realistic weather). Even better, unlike the original, Colorado-based Forza Horizon, drivers aren’t limited to roadways. It’s like Grand Theft Auto: You can go nearly anywhere. Forests, hay fields, off-road trails, and pedestrian walkways are all accessible.

The game is great fun, especially for those who don’t have the patience (or skill) for the ultra-realistic simulations of the track-based Forza games. Taking a Ferrari 458 Italia through a field of lavender may be hilariously unrealistic, but that doesn’t make it any less fun, and we suspect siblings will have hours of fun forcing each other into trees at 150 mph.

But not everyone has siblings, or friends who are online and available to play when they are. And racing against computer opponents programmed to drive a preset course, making every turn perfectly (and identically), is hardly a blast. That’s why Turn 10 created Drivatars, digital players that drive like humans, not automatons. Even better, they drive like specific humans: Every flesh and blood player gets a Drivatar of himself that learns from him and drives like he does. If your big sister isn’t online, you can play the digital version of her instead. (A separate version of Horizon 2, without Drivatars, will also be released on Xbox 360.)

So even if you’re the only human in a race, you’ll have a diverse set of opponents based on your online friends and other players. Some will be aggressive, smashing into other cars and taking every shortcut possible, while others are more likely to jump out of the way if you drive at them, or use extreme caution heading into corners. Sometimes they’ll just make mistakes, as people do, taking a corner too fast and spinning out, or crashing into oncoming traffic. The single-player mode is awfully close to playing with real people, and it’s a delight.


Next-Generation AI

For years, Turn 10 has wanted to use a learning neural network to totally change how racing artificial intelligence works and, potentially, change how videogaming works into the future. The Drivatars use Bayesian learning to develop in the image of their human counterparts: Yours picks up what routes you like to take, whether you drive fast into a corner or brake early, if you bump other cars to get them out of the way or steer clear of opponents.

The goal is to make a computer that drives just like a human. More importantly, the point is to have a variety of driving styles on the track. Dan Greenawalt, creative director at Turn 10 Studios, used the example of an old rivalry in F1 between Michael Schumacher and Juan Pablo Montoya. They had very different driving styles, one very clinical and precise, and the other much looser. This is what the team hopes the Drivatar can create, rather than having a single variety of AI driving around the track in the same way, lap after lap.
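The flavor of Bayesian learning described above can be sketched in miniature. This is a hypothetical illustration, not Turn 10's actual system: one binary driving habit (say, braking early into corners) is modeled as a Beta-Bernoulli update, so the AI's belief about the player sharpens with every observed corner, and its own behavior is sampled from that belief rather than scripted.

```python
import random

class DrivatarTrait:
    """Toy Bayesian learner for one binary driving habit,
    e.g. 'brakes early into corners' (illustrative only)."""

    def __init__(self):
        # Beta(1, 1) is a uniform prior: no opinion about the player yet.
        self.alpha = 1.0
        self.beta = 1.0

    def observe(self, braked_early: bool) -> None:
        # Conjugate update: each observed corner nudges the belief.
        if braked_early:
            self.alpha += 1
        else:
            self.beta += 1

    def sample(self) -> bool:
        # Draw the AI's behavior for one corner from the learned belief,
        # so it varies run to run, like a human, instead of repeating.
        p = random.betavariate(self.alpha, self.beta)
        return random.random() < p

trait = DrivatarTrait()
for _ in range(20):
    # Simulated human who brakes early about 80% of the time.
    trait.observe(random.random() < 0.8)

# Posterior mean of the learned habit (tends toward the human's rate).
print(trait.alpha / (trait.alpha + trait.beta))
```

The same pattern extends to many traits at once (line choice, aggression, shortcut-taking), which is the variety Greenawalt describes wanting on the track.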

Microsoft developed the technology more than a decade ago and put it to limited use in the original Forza game: Players couldn’t see or race each other’s Drivatars. Technology evolved and last year Turn 10 unveiled an entirely new cloud-based Drivatar for Forza 5, proving Drivatars could be much more fun to play against than traditional AI. But the nature of Forza Horizon 2 makes the feature even better.

In Horizon 2, players can drive wherever they want. Stunts like drifting, driving on two wheels, and getting airborne are encouraged. Going off-road is a central focus, while in Forza 5, driving off the track would carry a heavy penalty. So the rules by which the Drivatars are programmed needed to be updated.

“We had to change some definitions for ‘What is a road?’ versus what isn’t,” said Greenawalt. Fortunately, it didn’t take much: “We just turned it on.” Now, Drivatars are imitating their humans, cutting corners and crashing through cafés, anything to get to the finish line fastest. “They can drive through groves and vineyards, drive between trees. We didn’t know what that would be like. We didn’t know what they were going to do.”


Learning How to Drive

The thought that a computer is learning to imitate human behavior may be somewhat terrifying, but in this videogame, it’s sweet. And it’s produced some surprising results. A large part of Forza Horizon 2 involves simply milling about in the game world, without a particular destination. Greenawalt says his team saw Drivatars, without any human interaction, begin acting almost like real people. “You’ll see two Drivatars that will race against each other,” he said. “We didn’t tell it to do that. They learned that by watching players in Horizon do it.”

During the week I spent with the game, I watched a friend’s Drivatar drive around a hay field doing donuts and driving into hay bales. I gave him a heads-up that his digital representation was goofing off, and he wasn’t surprised: He’d spent 15 minutes doing exactly that earlier in the day. Another time, I was in second place toward the end of a race and I saw a Drivatar smash into an easy-to-avoid tree. A traditional AI would never have crashed, but whatever person that Drivatar was based on must have difficulties with arboreal obstacles. It felt like I was driving against real people, who make real mistakes, rather than precise, boring automatons. It was great.


Turn 10 can perform big data analytics on the Drivatars, examining what they’re doing as a population and watching particular behaviors spread. “We didn’t train cars how to block [competing cars from passing them] in studio, they just learned how to block from the population,” Greenawalt said. “It learns the correlation between the behavior and the conditions.”

Unsurprisingly, jerk humans make for jerk Drivatars. Players might turn around and drive the wrong way on a track to mess with their competitors, or smash into cars going around a corner to get an advantage. Drivatars learn to do the same. Turn 10 can change the game code to stop particularly obnoxious behavior, but errs on the laissez-faire side of things. The anything-goes approach is what makes Horizon 2 so fun, Greenawalt said. “We try not to put too many clamps or behavior modifiers on the Drivatars because it takes away from what makes them so unique.”


More Than One Way

It’s easy to see how the Drivatar technology could be applied to other games as well. In a multiplayer first-person shooter like Call of Duty, some players like a run-and-gun strategy, killing anything in their path. Others prefer to sit back and snipe, waiting for the action to come to them. They’re both legitimate ways to play the game, but with very different user behaviors. Having Drivatars (Shootatars?) based on your friends could make gaming much more interesting and could allow gamers to actually trust and work with their computer teammates, rather than simply using them as meat shields.

It could even make gaming more civilized. Human players tend to treat computer-controlled opponents like dirt, but in the year since Forza 5 was released, Greenawalt says, the community as a whole began driving more courteously, and their Drivatars echoed the shift. In online group play, drivers tend to be more polite because there are real people behind the wheels of the other cars; by closely imitating humans, Drivatars have started to bring that courtesy into single-player mode as well. Now the community drives much cleaner in single-player, ending up with better racing. “Social engineering intersects with AI and technology,” said Greenawalt.

This Music Video for Groundislava’s “Girl Behind the Glass” Is Actually a Videogame

Groundislava’s new album Frozen Throne is, essentially, about one man’s downfall after he loses the girl he loves in a virtual realm. (It’s also, not surprisingly, an homage to William Gibson’s Neuromancer.) That’s a theme that lends itself to a slam-dunk music video, right? Whip up a concept in the vein of The Matrix or Tron, hit “record,” and ta-da! Instant Vimeo hit. Groundislava—aka Jasper Patterson—took it a little further. He made the video a world you can actually play in.

The playable version of “Girl Behind the Glass” looks like what would happen if Rick Deckard dropped acid and went to Coachella in 2019 to search for an origami unicorn. (OK, in this case it’s a girl, but you get the idea.) As the song streams, players can point-and-click around in the video’s world to see distorted versions of what’s onscreen. There aren’t really points or conquests or anything, but the simple act of exploring its neon headtrip is more engaging than most music videos out there. (Taylor Swift’s “Shake It Off” notwithstanding.) It’s a vibe that permeates the Frozen Throne album and something Patterson wanted to infuse into the song’s video.

“Coming down from a number of different substances, [the protagonist of Frozen Throne] recklessly wanders into the cyberpunk abyss of a futuristic metropolis” before eventually finding his lost love, Patterson says. “We came up with a sort of roller coaster of scenes and images that exist in a realm somewhere between the virtual and ‘IRL’ worlds within the story, with the interactive component of the ‘video’ placing the viewer in the ‘driver’s seat’ of said roller coaster.”

Created by “experimental director duo” The Great Nordic Sword Fights, the resulting “video”—actually more like an interactive videogame (download it here)—for “Girl Behind the Glass” feels like getting lost in a hyper-color Max Headroom clip. Users maneuver through the experience led by a Power Glove-like hand and are free to look around the environment as they search for the eponymous girl. “If the mouse button is pressed the image will distort with colorful noises and effects,” Patterson says, “simulating a lapse in the character’s grasp on reality.”

It’s fun, even a little goofy—and its soundtrack is heavenly for anyone who misses Tears for Fears and/or Level 42. It’s also indicative of a mix of immense creativity and irreverence that must run in the family. Patterson’s father, Mike Patterson, is the guy who did the animation for a-ha’s now-legendary video for “Take On Me.” (He also worked on Paula Abdul’s “Opposites Attract.”) Not that the younger Patterson was looking to recreate the exact 1980s magic of his father’s videos.

“I don’t think his work directly inspired this video stylistically, but, in general, growing up around [it] created an unbreakable bond between sight and sound for me,” he says. “It helps when I collaborate with visual artists to create a music video because I enter the process with such concrete visual concepts already in mind.”

Check out a more traditional version of Groundislava’s new music video, which premiered today, above.

This App Fights Acne (It’s Not Nearly as Silly as It Sounds)

An Asynchronous Appointment

Bradford, who left his role as a partner at Kleiner Perkins to start Spruce last year, describes his app as a new way to visit the doctor, though that requires a fairly elastic idea of what constitutes a “visit” with a doctor in the first place. At no point in using the app do you communicate with a dermatologist in real time. Instead, everything happens asynchronously. You create an account. You take a few pictures of your face with your selfie camera. You answer some questions about your skin, and jot down whatever personal questions you might have. Then you send it off.

The whole process takes no more than a few minutes. The app’s bright, cheerful design is reassuring in the same way that a bright, cheerful doctor’s office might be. “Since it’s a new way to see the doctor, you don’t want to make it cold or too sci-fi,” Bradford says.

Ray Bradford (Image: Spruce)

Within 24 hours, a certified dermatologist in your state sends you a personalized treatment plan, with the appropriate prescriptions filed digitally to your pharmacy of choice. (To start, Spruce will be available to patients in California, New York, Florida, and Pennsylvania.) By keeping overhead costs for doctors low, Spruce is able to offer a flat rate for the service: $40.

A Better Visit

Bradford says this asynchronous design makes life easier for all parties involved. As a patient, you can start to take care of your skin problem in a few minutes, instead of scheduling an appointment and waiting a few weeks. Currently, Bradford says, only twenty percent or so of people who have acne bother seeing a dermatologist at all. An app like Spruce vastly reduces the activation energy required to get treatment.

For doctors, who use an accompanying app for treatment, Spruce offers the freedom to set their own schedule. They can take care of their new patients from an iPad, anywhere and anytime, with far less administrative tedium than an in-person visit.


The design process, which lasted a year in all, focused on the doctor’s experience as much as the patient’s. In talking to doctors, Spruce heard again and again that existing electronic medical record software was too cumbersome. “They talk about ‘death by a thousand clicks,’” says Megs Fulton, the company’s lead designer, who joined from Facebook.

To combat this, Fulton and company consciously tried to minimize the rote input required from doctors, leaving them free to spend their time on the more personalized aspects of the treatment. “We feel like that goes a long ways toward making things personal,” Fulton says.

Expediency and flexibility are two benefits of treatment-by-app, but Bradford thinks the approach could offer a more comprehensive experience in a few small but significant ways. For one thing, Spruce becomes a sort of clearinghouse for all things related to your acne treatment. It has information about whatever creams, ointments, or pills you were prescribed, and detailed instructions for using them. It has a sort of FAQ, which Spruce created with the help of dermatologists, where patients can find recommendations for approved moisturizers and sunscreens, and get the straight dope on topics like how diet affects skin.

The Spruce iPad app for doctors. (Image: Spruce)

Spruce also makes it easy to check back in after the initial “visit.” In addition to being assigned a dermatologist, patients are linked with a “care coordinator” who serves as a sort of nurse and front desk person for the whole experience. You can message this skin care concierge in the app—say, if you’re having trouble getting your insurance sorted out, or if you don’t quite remember in which order you’re supposed to apply your creams. This gets closer to the sort of thing we typically think of as the benefit of telemedicine: instant, lightweight access to a real person who can address small questions and concerns.

Beyond Acne?

Spruce does a remarkable job of taking something that could easily feel weird and making it feel very normal. The app is just as intuitive as Lyft or Yelp, except that instead of getting a ride or a restaurant you end up with a little tub of prescription-strength benzoyl peroxide. In that sense, it really does seem like some sort of step toward a whizbang health care future.

The question, of course, is whether the approach makes sense anywhere else. After all, acne is just about the only thing you can diagnose from a selfie.

Bradford is adamant that the vision for Spruce goes beyond pimples. He thinks that there are plenty of places where specific, thoughtfully-designed solutions will be able to improve the status quo. Acne might well be uniquely suited for remote treatment, but future Spruce efforts could ease health care pain points in other ways. “We see technology complementing the doctor, not competing with or replacing the doctor,” he says. “It’s how you let doctors do what they do best—which is using their judgment, and caring for patients—as opposed to repetitive, rote things, or administrative paperwork.”

Bradford won’t divulge what other conditions Spruce is thinking about tackling just yet. But he does point out that as gadgets like the Apple Watch, with its sophisticated health-related sensors, flood the mainstream, the opportunities to transform health care will only expand. Technology moves fast, and while health care moves slow, Bradford’s convinced that the latter will inevitably catch up with the former.

A Numerical Calculation of the Electric Field Due to a Charge Distribution

It’s time for another physics example. In this case, I am going to calculate the electric field due to an electrically charged rod. Of course you could do this analytically using a bit of calculus. This is a fairly standard example in most introductory physics textbooks. Here is an example where I calculate the electric field along the same axis as the rod.

But what if you want to find the electric field at any point? For instance, like this:


You can set up an integral to determine the electric field at that point, but it won’t be easy to evaluate. But the cool thing is that both the analytical and numerical methods in this case use the same idea. In both cases, you will break the charged rod into a whole bunch of tiny pieces. The electric field due to each of these tiny pieces is just like the electric field due to a point charge (if the pieces are small enough). Then the total electric field at the point of interest is just the sum of the tiny electric fields due to the tiny pieces of the rod. Really, the only difference is that in the analytical method you take the limit as the piece size approaches zero.

Ok, let’s set up a numerical method for calculating the electric field due to the rod. Here is the recipe.

  • Break the rod into N pieces (where you can change the value of N).

  • For each tiny little piece, calculate the charge and the position. The charge of each piece would just be Q/N.

  • Find the vector that goes from each piece of the rod to the point where you want to find the electric field.

  • Use the equation for the electric field to find the contribution to the total electric field due to each piece.

  • Add up all the contributions to the electric field due to all the pieces.

That’s it. It’s really not too complicated. In fact, you don’t even need a computer to do this. If you break the rod into 10 pieces, you could easily calculate the field due to each of these 10 pieces. Of course if you want to break it into 100 pieces, the calculations still might not be difficult, but the process might drive you insane.

Before getting into the program, let’s say that I want to find the electric field at some vector location ro. Here is how you would calculate the electric field due to one of the pieces.

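To make that single-piece step concrete, here is a minimal Python sketch of it (the function name, charge value, and positions are my own hypothetical examples, not the full program):

```python
# Field at an observation point due to ONE small piece of the rod,
# treated as a point charge. All values here are hypothetical examples.
k = 8.99e9  # Coulomb constant, N m^2 / C^2

def field_from_piece(q, r_piece, r_obs):
    """Return the field vector (Ex, Ey) at r_obs due to a point charge q at r_piece."""
    rx = r_obs[0] - r_piece[0]
    ry = r_obs[1] - r_piece[1]
    r = (rx**2 + ry**2) ** 0.5
    mag = k * q / r**2                    # magnitude: E = kq/r^2
    return (mag * rx / r, mag * ry / r)   # direction: unit vector from piece to point

# One piece of a 10-piece rod carrying 1e-8 C total, so q = Q/N = 1e-9 C,
# observed 0.1 m directly above the piece.
Ex, Ey = field_from_piece(1e-9, (0.0, 0.0), (0.0, 0.1))
```

Summing this contribution over all N pieces gives the total field; that loop is the part left as the exercise.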

Now for the program. Wait. I’m not going to show this part to you. I know, that sort of stinks – but that’s the way things are going to be. There are probably many introductory physics classes that use this problem as part of a homework assignment or something. I don’t want to spoil the solution. Sorry. However, I will show you what it looks like.


Yes. That looks very pretty, but it’s not that useful. In order to determine the accuracy of this numerical model, I need to calculate the electric field along an axis perpendicular to the rod, passing through the rod’s center. That is a region where I can also calculate the electric field with calculus, so I can see how well the two methods agree.

Skipping the derivation, I have two expressions for the magnitude of the electric field along an axis perpendicular to the center of the rod. The second formula is an approximation if the length of the rod is long compared to the distance from the rod.

E = kQ / (z√(z² + (L/2)²))        E ≈ 2kλ/z, where λ = Q/L and z is the distance from the center of the rod
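The two expressions are easy to compare directly. Here is a quick Python sketch of them (the function names are mine; k is the Coulomb constant, and Q and L are the values used below):

```python
k = 8.99e9   # Coulomb constant, N m^2 / C^2
Q = 1e-8     # total charge on the rod, C
L = 0.5      # rod length, m

def E_exact(z):
    """Exact field on the perpendicular bisector, a distance z from the rod."""
    return k * Q / (z * (z**2 + (L / 2)**2) ** 0.5)

def E_approx(z):
    """Infinite-line approximation, valid when z << L."""
    lam = Q / L              # linear charge density
    return 2 * k * lam / z

# Close to the rod the two agree; far away the approximation is too big.
for z in (0.01, 0.1, 0.5):
    print(z, E_exact(z), E_approx(z))
```

At z = 0.01 m the two expressions agree to about 0.1 percent, but at z = L the approximation is more than twice the exact value.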

Ok, let’s get to a calculation. I want to plot the magnitude of the electric field as a function of the distance from the rod for all three methods (the two equations and the numerical method). Here are my starting parameters.

  • Rod length = 0.5 meters.

  • Total charge = 1 × 10⁻⁸ Coulombs.

  • Number of pieces (for the numerical calculation) = 100.

Here is the plot. The horizontal axis is the distance from the rod divided by the length of the rod.

Here you can see that there is clearly a difference between the approximation and the other two methods of calculating the electric field. This is especially true as the observation point gets further away from the rod and the approximation that z is much smaller than L is obviously not true.

Now that this method seems to be working, let’s test the numerical model. How dependent is the solution on the number of pieces that the rod is broken into? This is a plot of the magnitude of the electric field at a distance of 0.1L from the middle of the rod, as a function of the number of pieces.

Why is it all zig-zaggy? My original guess was that it had to do with whether the rod was broken into an even or odd number of pieces. Looking at that data more closely, this is not the case. Perhaps it’s some sort of rounding error. I’m not sure.

So, how many pieces should you break the rod into? Obviously more is better. In this case, even breaking the rod into 1000 pieces doesn’t take any significant calculation time, and it gives a fairly reasonable answer. Of course, for other situations the calculation time could be important. You would have to pick some balance between fast, cheap, and accurate.

In the calculation above, it seems like the analytical solution is superior in every way. But wait! It’s not. The analytical solution only works on the line that runs perpendicular to the rod and through the middle of the rod. So let’s do something that the analytical solution can’t do. What if I want to calculate the value of the electric field along a line at some angle? Here is a diagram.


Here is a plot of the electric field along the line y = x. Actually, I will plot the component of the electric field in the direction of the line (instead of the magnitude of the electric field).

Ok, that’s cool – but how do I know if it is legit? Well, there is one trick I can use. What if I get really far away from this rod? In that case, the electric field should be similar to the electric field due to a point charge. At large distances, a rod just looks like a point.
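The plot checks this along the diagonal, where the numerical sum is needed; as a simpler stand-in, here is the same far-field idea sketched in Python along the perpendicular bisector, where an exact expression is known (same Q and L as before, function names mine):

```python
k = 8.99e9   # Coulomb constant, N m^2 / C^2
Q = 1e-8     # total charge on the rod, C
L = 0.5      # rod length, m

def E_rod_bisector(z):
    """Exact field of the rod on its perpendicular bisector, a distance z away."""
    return k * Q / (z * (z**2 + (L / 2)**2) ** 0.5)

def E_point(r):
    """Field of a point charge Q at distance r."""
    return k * Q / r**2

# A few rod-lengths away, the rod becomes indistinguishable from a point charge.
for r in (L, 2 * L, 5 * L):
    print(r, E_rod_bisector(r), E_point(r))
```

At a distance of L the two differ by about 11 percent; by 5L they agree to better than half a percent.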

Here is a plot of the component of the electric field along a diagonal for large distances along with the calculation of the field due to a point charge.

That’s nice. Actually, I am sort of surprised that the two electric fields are so close even at a distance of just L away from a rod of length L.

But there you go. That’s the electric field due to a charged rod. There is only one thing that would make this whole process better – experimental data for the electric field due to a rod. That would be pretty tough. It’s difficult to create a uniformly charged rod and even harder to measure the electric field at different points in space.

What if you did a similar calculation for the magnetic field due to a straight wire with current or even the magnetic field due to a loop of wire? The nice thing about the magnetic field is that you could also experimentally measure the magnetic field. Wouldn’t that be cool? Why don’t you do that for homework?

The WIRED iPhone 6 Review: Bigger, Better…and a Little Buggy

For the first time, the big question for Apple fans isn’t “Are you getting a new iPhone?” It’s which iPhone are you getting: a 6 or the ginormous 6 Plus?

Yes, the 5 came in two flavors too, but let’s be real—no one debated whether to get a 5c or a 5s at launch last year. If you could get a 5s, you did. These new phones arrived in a pair as well, but each one has its own target audience. I have tiny hands and a tendency to stash my phone in my pocket, so I got the 6. I handed the 6 Plus to my colleague Mat, whose review you can read here.

Now, I test gadgets all the time, and I don’t typically find unboxings exciting. But there was a sense of anticipation opening the iPhone 6. This thing is gorgeous, all sleek glass and cool (and, alas, malleable) aluminum. It looks positively space age, a vision of the future glimpsed in an episode of Star Trek. Every iPhone until now had a monolithic quality; this one is softer, yet no less impressive. There’s only a wisp of a bezel; the brushed aluminum back wraps around a softly curved edge to meet the glass front. And it is so slender, a mere 6.9 mm thick–that’s about eight stacked credit cards if you’re wondering. Go ahead and wear your skinniest skinny jeans; this thing will fit in the pocket no problem.

It’s almost comically light, too, at just 4.55 ounces. Yes, it’s slightly heavier than the 5s, but even with a substantially bigger display, it still feels like it might blow away in a stiff breeze. That screen, by the way, is the best you’ll find on any similar-sized smartphone. With a resolution of 1,334 x 750, its 326 ppi density (a higher resolution update to Apple’s famed Retina Display) renders pixels invisible to the naked eye. But it’s not just the sharpness that impresses. The colors are so rich and bright as to be almost surreal, and images appear to sit more closely under the display’s cover glass, more like an OLED Windows Phone display. Responsiveness is snappier than the 5s, too, with better receptivity at the edges of the display. Everything happens almost instantaneously.

The camera is no less impressive. Yes, that sapphire lens protrudes just enough to all but demand getting scratched, but it is among the very best installed on a smartphone. The 8-megapixel shooter uses a new sensor and a trick Apple calls “Focus Pixels” to boost the speed and precision of the camera’s autofocus. It’s noticeably better than the camera in the 5s. Landscape shots snapped from a moving car were so sharp you’d think they were taken while standing still. Photos taken in dim light were grainless, colorful, and sharp—even without the flash. White balance is excellent, too. The 6 has a camera so good that using Instagram filters borders on sacrilege.

Video quality gets a similar upgrade, with the ability to shoot at 1080p in either 30 or 60 fps. The image stabilization is rock-steady; one of WIRED’s photographers thought a video I shot freehand was done on a tripod. And the continual autofocus, color, and clarity made things so clear that some shots look almost like architectural renderings. Things get grainier in low light, but much less so than on the 5s. It’s a good thing Apple upped the capacity on the mid and upper storage configurations to 64 and 128 GB—you’re going to need it for all this media.

Time Lapse, iOS 8’s new camera function, is gimmicky but fun. It snaps photos at certain intervals while you leave your phone pointed at a certain scene or subject. Similarly, being able to record slo-mo at 240 fps is spectacular, letting you examine exactly how water splashes in a fountain, or how the skin on your face wobbles as you shake your head around. It’s an entertaining and enlightening way to view aspects of our world.

In my testing, the iPhone 6’s battery life is just as good—perhaps slightly better—than its predecessor. Although it has a larger display, it also houses a larger 1810 mAh battery to balance things out. With ample camera use, social media-ing, some gaming, and other random things you might use your phone for, the phone easily made it through the day with the screen brightness set just under 50 percent. Higher screen brightness, heavy gaming, or excessive downloading will take a greater toll, but I could usually finish the day with 20 to 30 percent battery life left.

The new A8 processor renders graphics-intensive games like a champ, and keeps the entire experience pleasantly speedy. The big difference over the iPhone 5s’ A7 processor is efficiency, but it also manages to make things like app loading noticeably snappier. Its co-processing partner, the M8, uses the compass, accelerometer, a new barometer, and gyroscope to track your steps, flights of stairs climbed, and other movement stats. It does all this in the background and without any major battery impact; in fact, you’d never know it was passively tracking your activity if you didn’t check the Health app.

If you’re among the two or three people still using your phone to make phone calls, you’ll be happy to know audio quality is exceedingly clear. T-Mobile users get it even better with Voice over LTE (VoLTE) and the ability to make calls over Wi-Fi, not just the cellular network. You can expect other carriers to follow suit eventually. Audio out of the iPhone 6’s six-hole speaker grille can easily go to 11 and beyond, more than enough dBs for anyone who isn’t already half deaf.

The excellence of the hardware is, alas, undermined by some problems with the software. Several times while opening Messages or other keyboard-dependent apps, the keyboard was MIA. It simply wouldn’t appear. This may or may not have had something to do with the installation of third-party keyboards. I’d either need to double tap the home button and swipe the app away, or put the phone to sleep and wake it back up for the keyboard to appear. I also had trouble swiping up from the lockscreen to access the camera, an irritating problem that caused me to miss multiple fleeting photo opportunities. But software is something we know can, and generally will, be improved with inevitable updates to Apple’s mobile OS (although iOS 8.0.1 brought its own batch of issues).

At this point, the iPhone 6 is a handsome, powerful handset with a phenomenal display and an insanely good camera. TouchID now works flawlessly, and iOS 8 brings loads of useful new features to the platform. It is easily one of the best phones out there. But the software is still buggy, occasionally unacceptably so. Part of that is to be expected: While iOS 7 was largely aesthetic, iOS 8’s core mission is enhancing usability, and there are a lot of thoughtful upgrades to the system: things as small as being able to mute members of an incessant texting thread, and as large as enabling your iPhone to become the hub of all your health tracking apps (through Health and HealthKit) and home automation services (through HomeKit).

The iPhone 6 feels like an Olympic gymnastics routine: It’s a big leap, and filled with brilliant feats, but Apple didn’t quite stick the landing.