Resistance to antibiotics: New rapid diagnosis

A rapid diagnostic test for multi-resistance to broad-spectrum antibiotics has just been developed at the University of Fribourg. Prof Patrice Nordmann and Dr Laurent Poirel of the Medical and Molecular Microbiology Unit have been collaborating with Unit 914 of the National Institute of Health and Medical Research (INSERM) in Paris, of which Patrice Nordmann is also Director. This new test allows the identification, in less than two hours, of multidrug-resistant strains of Acinetobacter baumannii, an important hospital pathogen. The large-scale application of this test will mean better control of the spread of certain traits of antibiotic resistance.



Bacterial resistance to antibiotics has increased considerably over recent years. The situation is particularly dramatic for gram-negative bacilli (Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa and Acinetobacter baumannii), which cause septicemic and abdominal infections as well as infections of the urinary tract and the lungs, considered the most frequent human infections in 2014. There are already signs of a real therapeutic impasse. Even extremely broad-spectrum antibiotics, such as the broad-spectrum cephalosporins and the carbapenems, antibiotics of last resort, are proving totally ineffective against certain strains of bacteria. It is estimated that in Europe the total number of deaths associated with multi-resistance to antibiotics is 25,000 annually. The rapid development of this resistance risks compromising whole areas of 21st-century medicine which require effective preventative or curative antibiotics, such as transplants, major surgery and intensive care.


Rapid diagnosis: a crucial factor


When a bacterium hydrolyses an antibiotic, it deactivates it. It is this phenomenon which had already been targeted by two rapid diagnostic tests developed by Patrice Nordmann and Laurent Poirel. These tests detect the presence of extended-spectrum beta-lactamase enzymes and of carbapenemases (which hydrolyse broad-spectrum cephalosporins and carbapenems in Enterobacteriaceae and Pseudomonas aeruginosa, respectively). Now the two researchers have developed the CarbAcineto NP test, which detects carbapenemase activity in A. baumannii; it is this carbapenemase activity which is systematically associated with multi-resistance to antibiotics in this species (fig. 1). The test is based on the acidification generated by the enzymatic hydrolysis of a carbapenem, imipenem, when it is cleaved by a carbapenemase: the medium acidifies and a pH indicator turns from red to yellow. Carbapenemase activity can be detected either in already-isolated bacteria or directly in samples from infection sites. The result is obtained in less than 2 hours, whereas other currently available techniques require a minimum of 24 hours and most frequently 72 hours. The sensitivity and specificity of the CarbAcineto NP test are both close to 100%, values rarely achieved by a diagnostic test in medicine.


The development of the CarbAcineto NP test is an important contribution to the struggle against the emergence of antibiotic resistance. It is simple and cost-effective and, by detecting multidrug-resistant strains, it can prevent them from spreading via outbreaks of hospital infections caused by multidrug-resistant bacteria, particularly among the most seriously ill patients -- those in intensive care. This new test also provides a guide in the choice between the very few remaining treatment options for infected patients.




Story Source:


The above story is based on materials provided by the Université de Fribourg. Note: Materials may be edited for content and length.



Salesforce and Philips Connect Doctors to Your Fitness Tracker


Getty



Apple has its HealthKit, Google its Google Fit; and now Salesforce and Philips are getting into the game as well with a cloud-based platform that could help doctors track data from a multitude of devices.


The two companies want to extend the Salesforce1 platform so that developers can write new apps that take data from different sources — MRI scanners or heart monitors, for example — and integrate it in a secure way while complying with privacy laws. Philips has already used the new platform to build its first two apps, Jeroen Tas, head of Philips’ Healthcare Informatics Solutions group, said today in a press conference.


Healthcare is one of the most promising fields for wearable computing and the Internet of Things. Today activity monitors like the Fitbit are mostly marketed to fitness nuts, but soon these types of devices could help chronically ill patients keep tabs on their health and gather vital data. And connecting these devices to the web could enable healthcare providers to respond to issues more quickly.


But making this work will require more than just a new breed of medical devices. Doctors and patients will also need ways to collect and analyze all of this new data.


The new apps are Philips eCareCompanion, a data management hub for patients, and eCareCoordinator, a tool that lets healthcare providers view data from hundreds or even thousands of patients from a single dashboard.


Using eCareCompanion, patients will collect data from connected devices, such as a weight scale that sends information to the cloud, or pill sorters that track whether a patient has actually taken their pills. Patients can keep track of their own progress in the app, and even grant access to their family members. Doctors and nurses can check up on their patients from eCareCoordinator. If something goes wrong, for example if a patient stops taking their pills, the provider will be alerted in the eCareCoordinator dashboard.
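Philips has not published the apps’ data formats or APIs, so the record shape and the missed-dose threshold below are invented; this is only a toy Python sketch of the kind of rule a dashboard like this might run against pill-sorter data:

```python
# Illustrative sketch only: the record format and alert threshold are
# hypothetical, not taken from Philips' actual products.

def adherence_alerts(readings, max_missed=2):
    """Return IDs of patients whose connected pill sorter reports more
    than `max_missed` missed doses."""
    alerts = []
    for patient_id, doses in readings.items():
        missed = sum(1 for taken in doses if not taken)  # False = dose skipped
        if missed > max_missed:
            alerts.append(patient_id)
    return alerts

readings = {
    "patient-17": [True, True, False, True],           # one missed dose: OK
    "patient-42": [False, False, True, False, False],  # four missed: flag it
}
print(adherence_alerts(readings))  # ['patient-42']
```

The interesting design question is where such a rule runs — on the device, in the cloud, or in the provider’s dashboard — since that determines how quickly an alert can reach a nurse.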


The apps will be released later this summer, and will be piloted by Banner Health, a chain of hospitals and specialized healthcare facilities.


Although Philips is the first company to build software using the new platform, Tas says that it will be open to any company or developer. The business model for the apps, and the platform, is still unclear, but Tas indicated that the apps would likely be paid for by patients, not insurance companies. He believes, though, that the software will lower the overall cost of healthcare, ultimately saving patients money.


Ultimately this is about building platforms for the world of the Internet of Things and wearables. It’s better business to build platforms than products, and we’re starting to see major technology companies try to find ways to become the gatekeepers for the Internet of Things. Salesforce has already rolled out a developer kit for building wearable applications that connect to the Salesforce cloud. And earlier this week Google announced an API for Nest, which will allow developers to connect their own devices and applications to the company’s line of home automation products.



40 Years On, the Barcode Has Turned Everything Into Information


Hulton Archive/Getty



When Alan Haberman came to San Francisco to upend the global economy—which in the end he did—he wasn’t seeking venture capitalists or software engineers. This was the early 1970s, when a computer in every home was still just Steve Jobs’ teenage dream. Anyway, Haberman wasn’t a geek. He was a grocer.


According to his New York Times obituary, this mid-level supermarket executive needed to convince some fellow respectable businessmen to follow his lead. Haberman wanted grocery stores to embrace the 12-digit Universal Product Code—better known as the barcode—to create a standardized system for tracking inventory and speeding checkout. He took his fellow execs to a nice dinner. Then, as was the fashion at the time, they went to see Deep Throat. And they liked Haberman’s idea, these guys with wide lapels who changed the business of how Americans bought food—a change that over the past 40 years has come to mean so much more.
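That 12-digit code carries its own integrity check: the final digit is computed from the other eleven, so a misread at the register is usually caught on the spot. A short sketch of the standard UPC-A calculation (the sample digits are the widely reported code from that first pack of Juicy Fruit):

```python
def upc_check_digit(digits11: str) -> int:
    """Return the UPC-A check digit for an 11-digit code body.

    Digits in odd positions (1st, 3rd, ..., 11th) are summed and tripled,
    digits in even positions are added as-is, and the check digit is
    whatever brings the grand total up to a multiple of 10.
    """
    odd = sum(int(d) for d in digits11[0::2])   # 1st, 3rd, ... digits
    even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, ... digits
    return (10 - (3 * odd + even) % 10) % 10

# 036000291452 is the code reported for the first scanned pack of Juicy Fruit;
# its first 11 digits do indeed yield a check digit of 2.
print(upc_check_digit("03600029145"))  # 2
```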


On June 26, 1974, at 8:01 a.m., Sharon Buchanan used a barcode to ring up a 10-pack of Juicy Fruit at the Marsh Supermarket in Troy, Ohio. A tectonic shift in the underlying economics of trade in tangible, physical goods of all kinds soon followed. Today, we celebrate the fortieth anniversary of this decisive moment — a moment whose universal impact can be seen in just how banal scanning a barcode has become.


Alan Haberman. GS1 US





Without the barcode, FedEx couldn’t guarantee overnight delivery. The just-in-time supply chain logistics that allow Walmart to keep prices low would not exist, and neither would big-box stores. Toyota’s revolutionary kanban manufacturing system depends on barcodes. From boarding passes to hospital patients, rental cars to nuclear waste, barcodes have reduced friction like few other technologies in the world’s slide toward globalization.


But putting barcodes on chocolate bars and instant oatmeal did more than revolutionize the economy, or the size of grocery stores. Thanks to bar codes, stuff was no longer just stuff. After a thing gets a barcode, that thing is no longer just itself. That thing now comes wrapped in a layer of information hovering just beyond sight in the digital ether. The thing becomes itself plus its data points, not just a physical object unto itself but tagged as a node in a global network of things. Barcodes serve up the augmented reality of the everyday, where everything can be cross-referenced with everything else, and everything has a number.


Haberman himself knew barcodes meant more than just a better way to manage supermarket inventory. He saw linguistics. He saw metaphysics. He also understood that those deeper abstract meanings held the key to barcodes’ radical practicality. “Go back to Genesis and read about the Creation,” Haberman once told The Boston Globe. “God says, ‘I will call the night “night”; I will call the heavens “heaven.”‘ Naming was important. Then the Tower of Babel came along and messed everything up. In effect, the U.P.C. has put everything back into one language, a kind of Esperanto, that works for everyone.”


In the mid-19th century, California’s railroad barons drove a golden spike through the preeminence of local trade. For most of the time humans have existed, what the average person could own depended almost entirely on where that person lived. The Transcontinental Railroad created the first physical network to break the consumer economy free from the constraints of location. Unglamorous folks like Haberman overlaid that physical network with an information network, midwifing the birth of a truly global economy in which technology gained final dominance over geography.


That loss of rootedness is what dystopianists see when they hold up the bar code as a talisman of cultural decay. When everything has a number, can our own commodification be far behind? What happens to individuality when we all become a function of our own data? A barcode tattoo has become a visual cliché, standard signifier of alienation. At the same time, nearly all babies born today in U.S. hospitals get barcode bracelets as soon as they’re swaddled.




But for better or worse, the history of civilization is in many ways a history of taking inventory. Writing emerged out of the clay tablet ledgers of ancient Babylon. Phoenician sailors invented numbers as we know them to keep track of cargo.


Photo: Wrigley's



In 1949, grad school dropout Joe Woodland drew Morse code dots and dashes on a Florida beach, then drew vertical lines down from each character to tease out the first prototype of the modern barcode. Less than a century later, the physical world teems with metadata just waiting for a smartphone to reveal its “presence.” Even today’s barcodes themselves aren’t limited to information about an object’s price, owner or location, but can convey instructions to a 3-D printer to create the object itself. That pack of Juicy Fruit that Haberman helped send past the cash register is now in the Smithsonian. Perhaps the next pack of gum to enter the museum’s collection will be the one the food fabricator on your counter made for you when you pulled its barcode from an iPhone app and waved it past the scanner in your kitchen.


Barcodes did not merely speed up economic processes but opened up new spaces of economic possibilities, entirely new configurations that indeed changed the world of business but also the cultural and physical landscapes we all share. This simple technology accelerated the pace of globalization, not just by increasing the speed at which trade could take place but also by enabling entire industries to take on new shapes, to inhabit new forms. The evolution of the bar code has expanded the global economy’s capacity to evolve.


Metadata is becoming a ubiquitous feature of the physical world, a kind of second nature that will seem as natural to children born today as a video chat on a touchscreen tablet. Sheeted in this second skin of information, we ourselves are already in the process of inheriting and embodying the legacy of the barcode, sending the species spiraling into the new spaces of evolutionary possibility.



Snarky Lawmaker Reminds Former NSA Chief That Selling State Secrets Is Illegal


Keith Alexander, former director of the NSA, during his retirement ceremony March 28. Brendan Smialowski/Getty Images



Cybersecurity firms and snake-oil salesmen promising protection from online threats are ubiquitous these days, and it’s hard to stand out in such a crowded field—unless you’re the former leader of the world’s best hacking outfit. In that case, the promises you sell carry more weight—and a higher price tag.


Which may well explain why Gen. Keith Alexander, the former head of the NSA and U.S. Cyber Command, has launched the consulting firm IronNet Cybersecurity. It also may explain why a congressman has reminded the former spy that selling top secret info is a crime.


To capitalize on his recent departure from military intelligence—Alexander resigned in March following months of revelations by NSA whistleblower Edward Snowden—the general is offering his security expertise to the banking industry for the fire sale price of $600,000 per month after first asking for $1 million. There are threats everywhere, Alexander warns, and “It would be devastating if one of our major banks was hit, because they’re so interconnected.”


That may be, but Rep. Alan Grayson (D-Florida) is suspicious that Alexander has anything useful to offer at that price—unless, that is, he’s peddling national security secrets.


In letters sent Wednesday (.pdf) to the Securities Industry and Financial Markets Association, the Consumer Bankers Association, the Financial Services Roundtable and the Clearing House—all of which Alexander reportedly has approached about his services—Grayson made it clear to Alexander and those who might retain him that selling classified information is illegal.


“I am writing with concerns about the potential disclosure of classified information by former National Security Agency Director Keith Alexander,” Grayson wrote. “Disclosing or misusing classified information for profit is, as Mr. Alexander well knows, a felony.


“I question how Mr. Alexander can provide any of the services he is offering unless he discloses or misuses classified information, including extremely sensitive sources and methods,” Grayson continued. “Without the classified information he acquired in his former position, he literally would have nothing to offer to you.”


Grayson’s staff says the congressman has not yet received a response from Alexander or any of the organizations that received the letter.


“The Congressman is very interested in what they have to say,” said Matt Stoller, Grayson’s senior policy advisor, in an email to WIRED.


Alexander could not be reached for comment.



These Automakers Picked Android Auto Instead of Apple CarPlay


Ariel Zambelich/WIRED



At Wednesday’s I/O conference, Google unveiled Android Auto, its answer to Apple’s CarPlay system. In compatible vehicles—the first of which are due by the end of 2014—users will be able to use their Android phone’s interface through the car’s dashboard screen to navigate with Google Maps, play Spotify, send texts, and more.


There are sneaky elements to Google’s plan for Android Auto, but don’t worry about that just yet. For now, all you really need to know is which carmakers support which platform.


As it stands now, drivers who really want a Fiat 500 or Audi A3 should make sure they’ve got an Android. Ferrari, BMW, and Mercedes-Benz customers should go with Apple, though we assume they can afford a few smartphones.


These guys are Android Auto exclusive so far:



  • Abarth

  • Acura

  • Alfa Romeo

  • Audi

  • Bentley

  • Chrysler

  • Dodge

  • Fiat

  • Infiniti

  • Jeep

  • Maserati

  • Mazda

  • RAM

  • Renault

  • Seat

  • Skoda

  • Volkswagen


And these brands are currently working with only Apple’s CarPlay:



  • BMW

  • Citroen

  • Ferrari

  • Jaguar-Land Rover

  • Mercedes-Benz

  • Peugeot

  • Toyota


A few brands have already signed up with both tech giants:



  • Chevrolet

  • Ford

  • Honda

  • Hyundai

  • Kia

  • Mitsubishi

  • Nissan

  • Opel

  • Subaru

  • Suzuki

  • Volvo


Since most car companies won’t want a potential customer’s smartphone preference to decide their choice of car, we expect this list to grow.



Why GoPro’s Success Isn’t Really About The Cameras



GoPro’s Hero 3 camera Jae C. Hong/AP



Millions of people have used GoPro’s wearable cameras to record their every sky-diving, drone-flying, shark-riding adventure. But the San Mateo, California-based company might have just pulled off the greatest stunt of all with the biggest initial public offering of a consumer electronics company in more than 20 years.

There’s a good reason we haven’t seen any consumer electronics companies go public recently (Skullcandy’s $189 million IPO in 2011 is the most recent), and that’s because smartphones—and the gargantuan companies that make them—can do and build almost everything and anything. On any given day, our phones can act as a GPS system, a video game console, a fitness tracker, a stereo, a camera, and oh yeah, a telephone, all in one. To launch a standalone consumer electronics company that does just one thing, even if it can do that one thing really, really well, is a risky endeavor in the age of the smartphone.


All of that makes GoPro, which raised $427 million at a valuation of $2.96 billion in its IPO, something of an anomaly. After all, it was just a few years ago that the Flip Video camera, another gadget that dominated the camcorder market for a time, foundered, rendered obsolete by the proliferation of smartphones with ever-improving cameras. Even as the Flip foundered, however, GoPro, which sold its first camera in 2004, flourished. What separates GoPro from Flip is that all along, GoPro has sold consumers not on the camera itself, but on something the smartphone can’t easily replace: the experience of using the camera.


“They don’t just sell a video camera, they sell the memory of the wave or the ski trip down the slope,” says Ben Arnold, a consumer technology industry analyst at The NPD Group. “I think we are entering an age where lifestyle in technology is becoming very important.”


That’s the reason, Arnold says, that brands like Beats and Fitbit have done so well. They say something about the people who wear them. The iPhone might have been a status symbol when it was first introduced. Now, it’s a utility that says as much about its owner as the fact that she is wearing shoes. But when you see someone with one of those GoPro Hero 3 cameras strapped to her chest, it’s a signal to the world that she is about to do something awesome.


The company has its customers to thank for helping it build that reputation. Following the lead of GoPro’s thrill-seeking CEO and founder Nick Woodman, GoPro users have flooded the Internet with videos of their own adventures. In 2013 alone, GoPro customers uploaded 2.8 years’ worth of video featuring GoPro in the title, according to the company’s S-1 filing. In the first quarter of 2014, people watched over 50 million hours of videos with GoPro somewhere in the title, filename, tag, or description. Each video not only serves as a customer testimonial, but as guerrilla advertising, giving potential customers millions of reasons why they should buy one of GoPro’s clunky little cameras. And so, despite the fact that GoPro only sells cameras (and accessories and mounts for cameras), it became better known as an adventure sports brand than as a camera manufacturer.




Now, the challenge ahead for GoPro is to start making money not just on cameras, but on the brand, too. In its S-1, the company admitted that it depends on camera sales “for substantially all of our revenue, and any decrease in the sales of these products would harm our business.” At the same time, the company wrote, “We do not expect to continue to grow in the future at the same rate as we have in the past.” That prediction is already coming true. Last year, GoPro’s year-over-year revenue growth fell to 87 percent from 125 percent the year before.


To avoid saturating the market, GoPro is now looking to turn itself into a media company. Already, it’s planning to launch a GoPro Channel on Xbox Live, and recently it made a deal with Virgin America to license videos for in-air entertainment. While GoPro hasn’t taken in any revenue from these deals yet, the company’s S-1 says that this year it will begin earning revenue from advertising on its Xbox, Virgin America, and YouTube channels.


If GoPro can make this transition successfully (and that’s a big if), it could serve as an exemplary model of what it takes to build a successful consumer electronics company in a smartphone-dominated world. As Adam Dornbusch, GoPro’s head of content distribution, recently summed it up for Variety, “The camera is just the tool to get to content.”



Why the Supreme Court May Finally Protect Your Privacy in the Cloud



When the Supreme Court ruled yesterday in the case of Riley v. California, it definitively told the government to keep its warrantless fingers off your cell phone. But as the full impact of that opinion has rippled through the privacy community, some SCOTUS-watchers say it could also signal a shift in how the Court sees the privacy of data in general—not just when it’s stored on your physical handset, but also when it’s kept somewhere far more vulnerable: in the servers of faraway Internet and phone companies.


In the Riley decision, which dealt with the post-arrest searches of an accused drug dealer in Boston and an alleged gang member in California, the court unanimously ruled that police need a warrant to search a suspect’s phone. The 28-page opinion penned by Chief Justice John Roberts explicitly avoids addressing a larger question about what’s known as the “third-party doctrine,” the notion that any data kept by a third party such as Verizon, AT&T, Google or Microsoft is fair game for a warrantless search. But even so, legal analysts reading between the opinion’s lines say they see evidence that the court is shifting its view on that long-stewing issue for online privacy. The results, if they’re right, could be future rulings from America’s highest court that seriously restrict both law enforcement’s and even the NSA’s ability to siphon Americans’ data from the cloud.


Digital Is Different


The key realization in Roberts’ ruling, according to Center for Democracy and Technology attorney Kevin Bankston, can be summarized as “digital is different”: modern phones generate a volume of private data that requires greater protection than non-digital sources of personal information. “Easy analogies of digital to traditional analog surveillance won’t cut it,” Bankston says.


Daniel Solove, a law professor at George Washington University Law School, echoes that sentiment in a blog post and points to this passage in the opinion:



First, a cell phone collects in one place many distinct types of information—an address, a note, a prescription, a bank statement, a video—that reveal much more in combination than any isolated record. Second, a cell phone’s capacity allows even just one type of information to convey far more than previously possible. The sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions.



That argument about the nature of digital collections of personal data seems to apply just as much to information held by a third party company as it does to information held in the palm of an arrested person’s hand. And Solove argues that could spell trouble for the third-party doctrine when it next comes before the Court. “The Court’s reasoning in Riley suggests that perhaps the Court is finally recognizing that old physical considerations—location, size, etc.—are no longer as relevant in light of modern technology. What matters is the data involved and how much it reveals about a person’s private life,” he writes. “If this is the larger principle the Court is recognizing today, then it strongly undermines some of the reasoning behind the third party doctrine.”


The Court’s opinion was careful not to make any overt reference to the third-party doctrine. In fact, it includes a tersely-worded footnote cautioning that the ruling’s arguments about physical search of phones “do not implicate the question whether the collection or inspection of aggregated digital information amounts to a search under other circumstances.”


But despite the Court’s caveat, its central argument in the opinion—that the notions of privacy applied to analog data are no longer sufficient to protect digital data from warrantless searches—doesn’t limit itself to physical access to devices. And the opinion seems to hint at the Court’s thoughts on protecting one sort of remotely-stored phone data in particular: location data.


The Logic of Location Data


The Riley ruling cites an opinion written by Justice Sonia Sotomayor in the case of US v. Jones, another landmark Supreme Court decision in 2012 that ended warrantless use of GPS devices to track criminal suspects’ cars. GPS devices, Sotomayor wrote at the time, create “a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” Roberts’ reference to that opinion in Tuesday’s ruling seems to acknowledge that the sensitivity of GPS device data extends to phone location data too. And there’s little logical reason to believe that phone data becomes less sensitive when it’s stored by AT&T instead of in an iPhone’s flash memory.


With Riley and Jones, “we’ve now seen two indications that the Supreme Court is rethinking privacy for stored data,” says Alex Abdo, a staff attorney at the American Civil Liberties Union. “Neither raises the question directly, but they both contain clues into the mindset of the court, and they both suggest that there’s another victory for privacy in the waiting.”


“If I were to guess,” Abdo adds, “I would predict that the Supreme Court will make good on its suggestion that the third-party doctrine doesn’t make sense in the context of cloud storage.”




The ripples from Riley may extend to the NSA’s surveillance practices, too, says Jennifer Granick, director of Civil Liberties at Stanford Law School’s Center for Internet and Society. She points out that the NSA has used the same third-party doctrine arguments to justify its collection of Americans’ phone data under section 215 of the Patriot Act. “What will this mean for the NSA’s bulk collection of call detail records and other so-called ‘metadata’?” she asks in a blog post. “The opinion suggests that when the Court has that question before it, the government’s approach may not win the day.”


Thanks to the caveat footnote limiting its significance to physical searches of phones, the Riley ruling likely won’t set any precedent useful for privacy activists just yet. But the CDT’s Kevin Bankston says it hints that the Supreme Court has acknowledged the need for new privacy protections in the age of mobile computing. “The Court is clearly concerned with allowing access to data in the cloud or on cell phones without a warrant. And that’s likely indicative about how they’ll approach things like cell phone location tracking and NSA surveillance in the future,” Bankston says. “The fourth amendment for the 21st century will be quite different from the fourth amendment in the 20th century.”



Productive Bugs




Software is buggy. As computer programs become more complicated and more interconnected, the likelihood that there is a problem somewhere just has to increase. In general, these bugs are bad, causing things to break or otherwise create problems. But every now and then, some software bugs can do something special.


If you’ve ever done any computer programming, you’ll know how easily one can create code that is more complex than intended and prone to bugs. In high school, I took a computer programming course. As a side project, I decided to see if I could create a rudimentary computer game. And by rudimentary, I mean that you controlled a circle that moved horizontally and could shoot at other circles. The code I wrote for the laser blast was supposed to look like a snazzy beam moving towards its target. Instead, for reasons I don’t think I ever entirely understood, the code ended up creating a rainbow-like beam that lanced across the screen. It was a much more impressive effect, completely unexpected, and not entirely understandable, at least to a novice teenage programmer.


This bug wasn’t what I expected, but was “productive” in the sense that it generated something beautiful. I learned of a bug similar to my own when I was reading the book The Fractal Geometry of Nature by Benoit Mandelbrot. In it, Mandelbrot has a fun little aside where he shows a Cubist-style computationally generated artwork. He notes that the artwork was constructed from incorrectly written computer code, designed initially to do something completely different.


But there can be productive bugs for a variety of reasons, unrelated to aesthetics. There’s Galaga, a classic shooter from the Eighties where your trusty spaceship had to eliminate all the bad guys. It was one of those archaic video games with simple graphics and goofy music, but it also had an intriguing glitch. Early on in the gameplay, if you eliminated nearly all enemies and then avoided them for several minutes, the baddies would never shoot at you again. A curious situation, but one that could be nicely exploited for some satisfying high scores.


Why does this happen? The variables that hold the “shots” get confused under certain conditions and neglect to refresh. Some speculate that this was an intentional feature of the game, to allow its developer to enter an arcade and rack up high scores. But whether intentional or not, it was often exploited to get lots of points.
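Galaga’s original arcade code isn’t something we can quote, so the toy Python snippet below is only an illustration of the general class of bug: state that is supposed to be refreshed every cycle but, after one rare branch, never is.

```python
# Toy illustration of a "stale state" bug, loosely inspired by the Galaga
# glitch described above -- not actual Galaga code. If one rare condition
# disarms the enemy guns, nothing ever re-arms them, so the enemies stop
# firing for the rest of the game.

class EnemyGuns:
    def __init__(self):
        self.armed = True

    def tick(self, enemies_alive, player_dodging):
        # Bug: this rare branch forgets to re-arm the guns afterwards.
        if enemies_alive <= 2 and player_dodging:
            self.armed = False  # firing state "confused" and never refreshed
        elif self.armed:
            return "fire"
        return "hold"

guns = EnemyGuns()
print(guns.tick(enemies_alive=8, player_dodging=False))  # fire
print(guns.tick(enemies_alive=2, player_dodging=True))   # hold (triggers bug)
print(guns.tick(enemies_alive=8, player_dodging=False))  # hold, forever after
```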


Or there is the case of a glitch in the online game World of Warcraft that effectively mimicked a biological virus, sweeping through thousands of players and acting like an unintentional plague—one due to an unexpected error residing in the sophistication of the program. This sounds completely negative, but it did have one very interesting productive feature: it allowed epidemiologists to study how an outbreak occurs, and has resulted in scientists discussing whether to use multiplayer games as platforms for research.


Perhaps we can view software bugs the same way biological mutations are viewed. Taking a page from molecular genetics, some mutations don’t do anything (these are neutral mutations) while others are deleterious and are the ones that most easily map onto the software bugs we are familiar with. But other mutations provide the grist for natural selection, as beneficial mutations. And so it might be with some software bugs. Bugs can end up being productive, whether for artistic, prestige, or scientific reasons.





Potent neurotoxin found in flatworm: Neurotoxin tetrodotoxin found in terrestrial environment for first time

The neurotoxin tetrodotoxin (TTX) has been found for the first time in two species living out of water, according to a study published June 25 in the open access journal PLOS ONE by Amber Stokes from California State University, Bakersfield, California, and colleagues.



Tetrodotoxin is a potent paralysis-inducing neurotoxin found in a multitude of aquatic organisms, but until now has not been found in terrestrial invertebrates. TTX is thought to originate from marine bacteria and accumulate in certain organisms through ingestion -- pufferfish for example -- but the origins and ecological functions of TTX in most taxa remain mysterious. The authors of this study found that two invasive species of terrestrial flatworm exhibit behaviors indicating possible use of a toxin like TTX to subdue large earthworm prey. The researchers analyzed TTX levels in two species of terrestrial flatworm and investigated its distribution throughout their bodies.


Using analytical characterization techniques, the authors confirmed TTX's presence in the flatworms and use of TTX during predation to subdue large prey. Additionally, they found TTX in the egg capsules of one of the flatworms, which may indicate a possible further role in defense.


According to the authors, these data suggest for the first time a potential route for TTX bioaccumulation in terrestrial systems.


Amber Stokes added, "This study is important as it is the first to show tetrodotoxin in a terrestrial invertebrate, which will allow further study of the production or accumulation of tetrodotoxin in terrestrial systems."




Story Source:


The above story is based on materials provided by PLOS. Note: Materials may be edited for content and length.



New material improves wound healing, keeps bacteria from sticking

As many patients know, treating wounds has become far more sophisticated than sewing stitches and applying gauze, but dressings still have shortcomings. Now scientists are reporting the next step in the evolution of wound treatment with a material that leads to faster healing than existing commercial dressings and prevents potentially harmful bacteria from sticking. Their study appears in the journal ACS Applied Materials & Interfaces.



Yung Chang and colleagues note that the need for improved dressings is becoming urgent as the global population ages. With it, health care providers will see more patients with bed sores and associated chronic skin wounds. An ideal dressing would speed up healing in addition to protecting a wound from bacterial infection. But current options fall short in one way or another. Hydrogels provide a damp environment to promote healing, but they don't allow a wound to "breathe." Dry films with tiny pores allow air to move in and out, but blood cells and bacteria can stick to the films and threaten the healing process. To solve these problems all at once, Chang's team looked to new materials.


They took a porous dry film and attached a mix of structures called zwitterions, which have been used successfully to prevent bacteria stickiness in blood filtering and other applications. The resulting material was slick to cells and bacteria, and it kept a moist environment, allowed the wound to breathe and encouraged healing. When the scientists tested it on mice, their wounds healed completely within two weeks, which is faster than with commercial dressings.




Story Source:


The above story is based on materials provided by American Chemical Society. Note: Materials may be edited for content and length.



Depends on what you mean by “know” [Pharyngula]


Chris Mooney is galloping around on his anti-science education hobby-horse again. That’s a harsh way to put it, but that’s what I see when he goes off on these crusades for changing everything by modifying the tone of the discussion. It’s all ideology and politics, don’t you know — if we could just frame our policy questions and decisions in a way that appealed to the conservative know-nothings, we’d be able to make progress and accomplish things. And, as usual, I expect he won’t recognize the irony of the fact that the way he communicates his message alienates scientists and science communicators.


He’s reporting on the work of Dan Kahan, who has done interesting and informative work on how ideology, both left and right, distorts decision making. Motivated reasoning is a real problem, and we all need to be aware of it. But this work, at least as described by Mooney, goes a step further to argue that conservatives aren’t as dumb as they seem — that they know the science, but are using politics and identity to dictate their answers. They already know the science, so teaching them how the science actually works can’t possibly be the answer — instead, we have to work around their biases and lead them by careful wording towards the resolution of real problems. Kahan says,



The problem is not that members of the public do not know enough, either about climate science or the weight of scientific opinion, to contribute intelligently as citizens to the challenges posed by climate change. It’s that the questions posed to them by those communicating information on global warming in the political realm have nothing to do with—are not measuring—what ordinary citizens know.



I disagree. The public does not know enough. I don’t think Kahan or Mooney have a clear idea of what they mean by “know”. And I don’t think they’re recognizing that if they believe they are clever enough to trick the public into revealing their true knowledge by rephrasing questions about science, that perhaps the public is also clever enough to hide their true ideas about science in their answers.


They’ve evaluated public knowledge of science with sets of multiple choice questions phrased in two different ways, to show that the answers you get vary with the wording. First: speaking as a teacher, multiple choice questions are terrible at testing in-depth knowledge and understanding. They’re fine for evaluating basic facts, but even there, they can be gamed. Often, the strategy for answering multiple choice questions isn’t necessarily based on knowledge of the material, but understanding human nature and the psychology of the person who wrote the test — the wording of the question and the alternative answers can be a good clue to which one the instructor thinks is best.


Second, we’ve known about this phenomenon for a fairly long time. About ten years ago, I heard Eugenie Scott explain how soft polls on evolution were: that by changing the wording from “Humans evolved over millions of years” to “Dogs evolved over millions of years”, you could get a tremendous improvement in the percentage of respondents approving of the statement.


Kahan has discovered that you get the same improvement from conservatives if you change “the earth is warming” to “climate scientists believe the earth is warming,” testifying to the fact, Mooney says, that they actually do know what the science says, it’s just that phrasing the question wrong pushes their buttons and causes them to reject the idea.


Bullshit. Look, I know creationist arguments inside and out; I can often finish their sentences for them, and can even cite the original sources that they didn’t know their claims came from. This does not in any way imply that I think like a creationist, that I’m ready to accept creationism, that I sympathize or agree with their position, or that I think creationism ought to be considered as a source of facts in public policy. I know what they say, but I also know all the arguments against their nonsense. That a climate change denialist is able to regurgitate what he’s heard a scientist say does not mean he is not also packed to the gills with lies and rationalizations; that he’s able to check a box on a paper exam does not mean that he won’t act against that fact in his public activities.


I’ve also talked at length, hours on end, with creationists. And no, I’m sorry, despite being able to puke up quotations from what scientists actually say, they really are grossly ignorant of evolution. Are we going to start using quote-mining as an example of the scientific process?


Another example from teaching genetics. I once assigned a problem of medium difficulty on a homework assignment, involving Mendelian crosses of flies with different wing shapes. A little later I had the students do the exact same problem in an in-class exercise — a way to spot check whether they’d actually worked through the problem. Easy peasy, they breezed through it in class, and the students I asked could even explain the process for solving it. Then, on an exam, I repeated the very same problem, except that I changed every mention of Drosophila to Danio, and changed the hypothetical phenotypes from wing shape to fin shape. But the numbers, the crosses, the outcomes were all copied directly from the homework. All the changes were superficial.


A third of the class bombed it.


Did these students know how to solve the problem? I suppose Mooney could claim that they knew how to do fruit fly genetics, but simply didn’t know the details of fish genetics. But I would say no, not at all; that they could reiterate the procedure they memorized in one problem does not in any way imply that they could understand the concepts. It was the same damned problem! The students who could repeat an answer in one very narrow context did not know the science. They were unable to generalize and apply a conceptual understanding to a specific problem.


(For those of you concerned about my students, this is a common problem; a lot of what I’m doing in the classroom and exams is taking ideas they’ve grown comfortable with and twisting them a little bit to compel them to THINK about the problem, rather than trying to find which rut in their brain it fits best. Learning has to be procedural and general, not liturgical. They mostly get it eventually, oh, but how they suffer through the exams. “This wasn’t in the homework or the class examples!” is a common complaint, to which I reply, “Of course not.”)


Mooney likes to cite empirical, practical results of his approach, which is good…but unfortunately, they always undermine his premises, and he sometimes isn’t even aware of it.



Later in the paper, Kahan goes on to assert that precisely this strategy is working right now in Southeast Florida, where members of the Regional Climate Change Compact have brought on board politically diverse constituencies by studiously avoiding pushing anyone’s buttons. Kahan even shows polling data suggesting that questions like "local and state officials should be involved in identifying steps that local communities can take to reduce the risk posed by rising sea levels" do not provoke a polarized response in this region. Rather, liberals and conservatives alike in Southeast Florida agree with such a statement, which references a major consequence of climate change while ignoring the gigantic elephant in the room…its cause.



I’ve emphasized that last bit, because it is so damning. What good is this approach? If you know anything about science at all, you understand that how we know what we know, the epistemology of science, is absolutely critical to our progress. You’re stuck like my students in the early part of the semester, able to tick off check boxes on a multiple choice test or follow a cookbook procedure to arrive at a specific answer, but unable to generalize or extend their knowledge to new problems (really, let me assure you though, most of them got much better at that by the end of the term!). Those respondents in Florida don’t understand the science — all the participants know is which buttons to push, which ones to avoid, with the aim of steering the poor stupid mouse through the maze to a cheese reward at the end.


OK, to be fair, this is a case where Mooney is at least vaguely aware of the problem. Here’s his next paragraph.



Here’s the problem, though. Maybe this approach will work up to a point, or in certain locales (in North Carolina, the response to sea level rise is pretty different). But at some point, we really do need to all agree that the globe is warming, so that we can then make very difficult choices on how to deal with that. To save our feverish planet, it is dubious that merely having conservatives know what scientists think—rather than accepting it themselves, taking the reality into their hearts and identities—will be enough.



Very good. So why did Mooney write a whole column arguing that conservatives aren’t really as anti-science as they seem to be, that ends with an acknowledgment that, well, not knowing how reality works isn’t a good long-term strategy for responding to challenges from reality? The entire first 90% of the article is bogged down with this misbegotten notion that we can equate science understanding with checking the right alternative on a multiple choice test, only to notice in the last paragraph that oh, hey, that’s not science.



Why the Best Designers Don’t Specialize in Any One Thing





The digital world is at an inflection point, and the implications demand that organizations—from big companies to startups to marketing agencies—hire designers who are smart generalists.


Think about the moment we’re in: mobile, big data and personalization are converging to drive truly novel user experiences across countless new channels and in real life. In this post-screen world, the lines between the physical and the digital blur, and everything from your heartbeat to your thermostat connects to everything else. It’s a world of experiences, less and less dependent on any one platform, device, interface or technology. The best designers for this new environment are those who can confidently navigate change by adapting, not those who cling to whatever specialty in which they were formally trained or have the most experience.





Josh Payton is the Vice President of User Experience at Huge, Inc., a digital agency specializing in design that makes lives better.




The Shaping of a Generalist


I discovered I wanted to be a designer thanks to my middle school art teacher. He fit the stereotype: an easygoing surfer type with long hair. He reeled us in with topics that were appealing to 13-year-olds, like comic book art and Roy Lichtenstein. He also had a professional painting and airbrush business on the side and did the gymnasium floors, walls, signs, and parking lots for every school in the district. By teaching us some of those skills – how to use an airbrush and touch up magazine photos – he showed us that if comics and movie posters can be art, then maybe the stuff in old museums can be fun.


But I couldn’t be an airbrush artist today if I wanted to. Digital photography and editing tools have rendered that entire design specialty obsolete. It now seems ridiculous that designers once used mechanical tools to lay out magazine spreads and physically painted photographs to touch them up. Luckily, though, my teacher did more than relay a specialty; he taught us to be curious and inventive and to find connections between seemingly disparate things. In other words, my art teacher taught us to be smart generalists.


Little did he know that he was preparing us well to thrive in the very future state of affairs that would make his own expertise useless. Today’s innovations demand that we design with the unknown, the conjectural and the hypothetical in mind. Think about it: even the more complex interactions and interfaces made possible by mobile in recent years focus largely on real-time moments; one frame in the movie of someone’s life. But as personalization and predictive analytics work to anticipate what’s next, the emerging ecosystem will extend into the user’s future.


As a User Experience Designer, I believe that the best digital minds move deftly across specialties – a task that takes not just subject matter but process, products, context, and user needs into account, always. Consider, then, the smart generalists who will design these holistic offerings as experience editors. Where will they come from?


Knowing a Smart Generalist When You See Her Resume


I’m often faced with two types of job applicants. One has years of experience, an impressive portfolio of work and a specialty that took years to hone. That candidate discusses their job history engagingly, within the parameters of what is known and what has come before. The other candidate is young—sometimes almost ridiculously so—and is only held back by a lack of experience. That candidate never talks about history, but about what she wants to learn, where she thinks the world is going, and what kinds of products she wants to develop there. The second candidate is the smarter hire.


As digital experiences become more ubiquitous and harder to separate from analog ones, the field of user experience is transforming. Those who treat UX like a rigid set of tasks and capabilities will be left behind, outpaced by digital natives who intuitively break down disciplinary boundaries. In delivering the experiences that will increasingly be demanded by users, everyone on the team—even the specialists—will need to understand what others are doing and think more holistically, especially managers. No one should look at a plan on paper without being able to imagine the experience it’s meant to produce with some understanding of how to actually sit down and build it.




Perhaps most importantly, every team member should adopt an experimental mindset, ready to iterate over time and to address user needs in new ways. There’s no time to agonize until a plan is perfect; designers need to design for extensibility, streamline development, release great products faster, and learn as much as they can in the process. In many of these new use cases, we’ll be designing things that have no predecessor, so constant tinkering and improvement will be key. As the Marines—the generalists of the military—say: improvise, adapt and overcome.


In his own quirky way, my analog art teacher instilled that kind of digital-ready thinking in my classmates and me. Today’s teachers, from the elementary level up through the design schools, would be wise to emulate him. Because these digital polymaths are the ones who will make our brave new world work for users.



Angry Nerd: The French Comics That’d Make for Better Movies Than Marvel Heroes


It’s hard for him to admit, but Angry Nerd has had his fill of Marvel and DC Comics blockbusters. But that doesn’t mean he’s tired of comic book flicks—perish the thought! Instead of another big-budget movie with an oversaturated superhero, why not look to sophisticated French graphic novels for source material? Case in point: the excellent new comic-to-silver-screen adaptation Snowpiercer.



Back Up WhatsApp to Keep Your Precious Messages Forever





Last week, I finally banished the Phone app from my iPhone dock and put WhatsApp in its place. I don’t remember the last time I made a phone call, but I probably launch WhatsApp close to one hundred times every day. There are dozens of WhatsApp groups always screaming for attention—a grade school friends group, college classmates, family members, and more. In all, I have over 84,000 messages and nearly 1,800 pictures in WhatsApp. That’s a fair chunk of data that would be gone forever if I ever lost my phone, or dropped it in the toilet.


So can you back up WhatsApp? Absolutely!


On an iPhone, it’s really straightforward. WhatsApp uses iCloud to back up not just text messages but also all incoming and outgoing media like photos, videos, and voice messages.


To initiate a backup, tap the Settings button on the menu bar at the bottom of WhatsApp. Head over to Chat Settings, then tap Chat Backup. Here, you can manually start a backup by tapping the Back Up Now button. Or you can set WhatsApp to automatically back up your stuff daily, weekly, or monthly. WhatsApp will only start to back up automatically if your phone is plugged in. But make sure you’re connected to a Wi-Fi network, because it will happily start backing itself up over your data plan! Backups can be huge—hundreds of MBs if you’re an active user.


If you change iPhones, you’ll be prompted to restore WhatsApp from a backup the first time you install it. As long as you’re using the same iCloud account, you should pick up right where you left off.


On an Android, things get slightly more complicated. If your phone has a microSD card, that’s what WhatsApp will back up to by default. To start a backup, open WhatsApp and hit the Menu button. Navigate to Settings —> Chat Settings and then tap on Backup Conversations. Simply move the microSD card over to your new phone to restore WhatsApp conversations.


If your Android phone doesn’t have a microSD card, go through the same steps above. WhatsApp will back up to your phone’s internal memory at this path: /sdcard/WhatsApp/. You will need to transfer this folder from your old phone to the same folder on your new phone by copying it to a computer.
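If both phones mount as ordinary drives on your computer, that two-hop copy can be scripted. This is just a sketch under assumptions: the function name, staging folder, and mount-point paths below are all examples, not anything WhatsApp itself provides.

```python
# Hypothetical helper: copy the WhatsApp folder old phone -> computer -> new phone.
# All names and paths here are illustrative examples.
import shutil
from pathlib import Path

def transfer_whatsapp_backup(old_phone: Path, staging: Path, new_phone: Path) -> None:
    """Pull the WhatsApp folder off the old phone into a staging folder
    on the computer, then push it to the same location on the new phone."""
    shutil.copytree(old_phone, staging, dirs_exist_ok=True)    # phone -> computer
    shutil.copytree(staging, new_phone, dirs_exist_ok=True)    # computer -> phone

# Example usage, assuming the phones show up as USB drives at these paths:
# transfer_whatsapp_backup(Path("/Volumes/OldPhone/WhatsApp"),
#                          Path.home() / "whatsapp-staging",
#                          Path("/Volumes/NewPhone/WhatsApp"))
```

The staging copy is the point: it leaves a backup on the computer even after the new phone has been restored, so the messages survive a lost or toilet-dunked phone.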


Theoretically, the Android method will save you in a pinch, but it relies on you manually copying stuff back and forth between devices. Seriously, who has the time for that?


There are many things wrong with iCloud, but in this case, it is a thing of beauty.



The Week’s Best Trailers: Hunger Games! Simon Pegg! Fake Heads!


In this week’s best trailers, Lionsgate revs up its Hunger Games engine, a crowdfunded festival favorite pulls no punches, and Simon Pegg goes globetrotting. At this point in the juggernaut, you have to assume anything Hunger Games is just gonna steamroll everything—but despite Donald Sutherland’s best efforts at selling Panem, we’ll take Pegg’s journey over the Capitol any day.


The One Everyone is Talking About: The Hunger Games: Mockingjay—Part 1



The Best GIFs and Memes From the World Cup So Far




It’s been exactly two weeks since the World Cup—the quadrennial international soccer tournament that requires at least one country to be up way past its bedtime for an entire month—began in Brazil. And, not too surprisingly, it’s been a wonderful hot mess of action. So far, someone’s gotten bit, someone else nearly lost a nose, and the entire United States collectively looked at one another and went, “We might make it to the next round!” And that’s just the beginning.


Meanwhile, talk of the World Cup has (much to the dismay of non-soccer fans) overtaken social media and, like other major sporting events before it, become fertile ground for the making of memes and GIFs. The latter format, of course, is almost perfectly suited to soccer, which has long periods of calm ball control punctuated by near-miraculous goals and near-fatal collisions. In honor of the U.S. team’s big match against Germany today, we decided to collect some of our favorite GIFs and memes from the World Cup so far (we’re only halfway through the tournament, though, so don’t presume this is the only time we’ll post these). Check them out in the gallery above.