Clues to superbug evolution: Microbiologists sequence entire genome of a Klebsiella pneumoniae strain

Imagine going to the hospital with one disease and coming home with something much worse, or not coming home at all.



With the emergence and spread of antibiotic-resistant pathogens, healthcare-associated infections have become a serious threat. On any given day about one in 25 hospital patients has at least one such infection, and as many as one in nine die as a result, according to the Centers for Disease Control and Prevention.


Consider Klebsiella pneumoniae, not typically a ferocious pathogen, but now armed with resistance to virtually all antibiotics in current clinical use. It is the most common species of carbapenem-resistant Enterobacteriaceae (CRE) in the United States. Because carbapenems are considered antibiotics of last resort, CREs are a triple threat: resistance to nearly all antibiotics, high mortality rates and the ability to spread that resistance to other bacteria.


But there is hope. A team of Sandia National Laboratories microbiologists recently sequenced, for the first time, the entire genome of a Klebsiella pneumoniae strain encoding New Delhi metallo-beta-lactamase (NDM-1). They presented their findings in a paper published in PLOS ONE, "Resistance Determinants and Mobile Genetic Elements of an NDM-1 Encoding Klebsiella pneumoniae Strain."


The Sandia team of Corey Hudson, Zach Bent, Robert Meagher and Kelly Williams is beginning to understand the bacterium's multifaceted mechanisms of resistance. To do this, they developed several new bioinformatics tools for identifying mechanisms of genetic movement, tools that also might be effective at detecting bioengineering.


"Once we had the entire genome sequenced, it was a real eye opener to see the concentration of so many antibiotic resistant genes and so many different mechanisms for accumulating them," explained Williams, a bioinformaticist. "Just sequencing this genome unlocked a vault of information about how genes move between bacteria and how DNA moves within the chromosome."


Meagher first worked last year with Klebsiella pneumoniae ATCC BAA-2146 (Kpn2146), the first U.S. isolate found to encode NDM-1. Along with E. coli, it was used to test an automated sequencing library preparation platform for the RapTOR Grand Challenge, a Sandia project that developed techniques to allow discovery of pathogens in clinical samples.


"I've been interested in multi-drug-resistant organisms for some time. The NDM-1 drug resistance trait is spreading rapidly worldwide, so there is a great need for diagnostic tools," said Meagher. "This particular strain of Klebsiella pneumoniae is fascinating and terrifying because it's resistant to practically everything. Some of that you can explain on the basis on NDM-1, but it's also resistant to other classes of antibiotics that NDM-1 has no bearing on."


Unlocking Klebsiella pneumoniae


Assembling an entire genome is like putting together a puzzle. Klebsiella pneumoniae turned out to have one large chromosome and four plasmids, small DNA molecules physically separate from and able to replicate independently of the bacterial cell's chromosomal DNA. Plasmids often carry antibiotic-resistance genes and other defense mechanisms.


The researchers discovered that their Klebsiella pneumoniae bacteria encoded 34 separate antibiotic-resistance enzymes, as well as efflux pumps that move compounds out of cells, and mutations in chromosomal genes that are expected to confer resistance. They also identified several mechanisms that allow cells to mobilize resistance genes, both within a single cell and between cells.


"Each one of those genes has a story: how it got into this bacteria, where it has been, and how it has evolved," said Williams.


Necessity leads to development of new tools


Klebsiella pneumoniae uses established mechanisms to move genes, such as "jumping genes" known as transposons, and genomic islands, mobile DNA elements that enable horizontal gene transfer between organisms. However, the organism has so many tricks and weapons that the research team had to go beyond existing bioinformatics tools and develop new ways of identifying mechanisms of genetic movement.


Williams and Hudson detected circular forms of transposons caught in the act of moving, something that had not been demonstrated this way before, and discovered sites within the genome undergoing homologous recombination, another gene-mobilization mechanism. By applying two existing bioinformatics methods for detecting genomic islands, they found a third class of islands that neither method alone could have detected.
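
The idea of cross-comparing island predictions can be illustrated with a short interval-overlap script. This is a minimal sketch under assumed inputs, not the Sandia team's actual tools; the method names and coordinates below are hypothetical placeholders.

```python
# Minimal sketch: combine genomic-island calls from two prediction methods.
# The island coordinates are hypothetical; real island finders would supply them.

def merge_intervals(intervals):
    """Merge overlapping (start, end) intervals on one chromosome."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def overlaps(region, islands):
    """True if 'region' overlaps any interval in 'islands'."""
    return any(s < region[1] and region[0] < e for s, e in islands)

# Hypothetical predictions (base-pair coordinates) from two methods.
method_a = [(12000, 45000), (230000, 260000)]
method_b = [(40000, 70000), (500000, 520000)]

for island in merge_intervals(method_a + method_b):
    support = [name for name, calls in [("A", method_a), ("B", method_b)]
               if overlaps(island, calls)]
    print(island, "supported by method(s):", support)
```

Merged regions whose full extent is only visible when the two sets of calls are combined are, in spirit, the kind of candidate islands that neither predictor delineates on its own.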


"To some extent, every extra piece of DNA that a bacteria acquires comes at some cost, so the bacteria doesn't usually hang onto traits it doesn't need," said Hudson. "The further we dug down into the genome, the more stories we found about movement within the organism and from other organisms and the history of insults, like antibiotics, that it has faced. This particular bacteria is just getting nastier over time."


Applying findings to future work


The findings are being applied to a Laboratory Directed Research and Development project led by Sandia microbiologist Eric Carnes, who is examining alternative approaches for treating drug-resistant organisms. "Instead of traditional antibiotics, we use a sequence-based approach to silence expression of drug-resistant genes," said Meagher.


The researchers also are applying their understanding of Klebsiella pneumoniae's mechanisms of resistance, and their new bioinformatics tools, to developing diagnostic tools to detect bioengineering. Looking across 10 related but distinct strains of Klebsiella pneumoniae, they pinpointed regions that were new to their strain and thus indicative of genetic movement.
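
One generic way to flag regions unique to a strain is k-mer comparison against a panel of related genomes. The sketch below illustrates that general idea only; it is not the tool described in the paper, and the sequences, k-mer size, window size and cutoff are toy assumptions.

```python
# Minimal sketch: flag windows of a query genome whose k-mers are absent from
# a panel of related genomes, suggesting recently acquired (mobile) DNA.

def kmers(seq, k):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def novel_windows(query, panel_genomes, k=21, window=1000, cutoff=0.8):
    """Return (start, end) windows where >= cutoff of k-mers are panel-absent."""
    panel = set()
    for genome in panel_genomes:
        panel.update(kmers(genome, k))
    hits = []
    for start in range(0, max(len(query) - window + 1, 1), window):
        chunk = query[start:start + window]
        total = max(len(chunk) - k + 1, 1)
        absent = sum(1 for km in kmers(chunk, k) if km not in panel)
        if absent / total >= cutoff:
            hits.append((start, start + len(chunk)))
    return hits

# Toy demonstration: a shared "backbone" plus one inserted block the related
# genomes lack; real input would be FASTA sequences of the compared strains.
backbone = "ACGTTGCA" * 50
island = "GGGCCGTTAACCGGTATATCCG" * 10
query = backbone + island + backbone
panel = [backbone + backbone]

print(novel_windows(query, panel, k=11, window=100))
```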


"By studying the pattern of movement, we can better characterize a natural genomic island," said Hudson. "This leads down the path of what does an unnatural island look like, which is an indication of bioengineering. We hope to apply the knowledge we gained from sequencing Klebsiella pneumoniae to developing diagnostic tools that could detect bioengineering."



Recreating the stripe patterns found in animals by engineering synthetic gene networks

Pattern formation is essential in the development of animals and plants. The central problem in pattern formation is how genetic information can be reliably translated into specific spatial patterns of cellular differentiation.



The French-flag model of stripe formation is a classic paradigm in developmental biology. Cell differentiation, represented by the different colours of the French flag, is caused by a gradient of a signalling molecule (morphogen); i.e. at high, middle or low concentrations of the morphogen a "blue," "white" or "red" gene stripe is activated, respectively. How cellular gene regulatory networks (GRNs) respond to the morphogen in a concentration-dependent manner is a pivotal question in developmental biology. Synthetic biology is a promising new tool for studying the function and properties of GRNs by building them from first principles. This study developed synthetic biology methods to build some of the fundamental mechanisms behind stripe formation.


In previous studies, gene circuits with predefined behaviors have been successfully built and modeled, but mostly on a case-by-case basis. In this study, published in Nature Communications, researchers from the EMBL/CRG Systems Biology Research Unit at the CRG went beyond individual networks and explored both computational and synthetic mechanisms for a complete set of 3-node stripe-forming networks in Escherichia coli. The approach combined experimental synthetic biology, led by Mark Isalan, now Reader in Gene Network Engineering at the Department of Life Sciences of Imperial College London, with computational modelling led by James Sharpe, ICREA Research Professor and head of the Multicellular Systems Biology lab at the CRG.


"We have performed a very innovative and ambitious study: we applied a three-step approach for the effective exploration and creation of successful synthetic gene circuits. We created a theoretical framework to study the GRNs exhaustively" -- 100,000 versions of over 2800 networks were simulated on the computer. We then successfully developed a synthetic network engineering system and, finally, we confirmed all the new experimental data by fitting it to a single mathematical model" explains the corresponding author James Sharpe.


First, Andreea Munteanu, co-author of the study, performed a theoretical screen for finding all design classes that produce the desired behaviour (stripe formation in a morphogen gradient). During this step she discovered four fundamentally-different mechanisms for forming a stripe. Next, Yolanda Schaerli, first author of the study, successfully demonstrated that the four networks are functional by building them in the bacteria E. coli using the tools of synthetic biology. The third step was to verify the distinct mechanisms by fitting all the experimental data to a mathematical model.


The success of this procedure allowed the researchers to go one step further to find a deeper design principle of stripe formation. They identified a simpler 2-node network -- where the stripe gene is directly controlled by both activation and repression from the morphogen sensor gene -- that replicates the stripe-forming ability in its simplest form. They were successful in building this archetype of stripe-forming networks and ultimately discovered that it can even display an "anti-stripe" phenotype (fig. 2).
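
The logic of that 2-node archetype can be captured in a few lines: a single sensor term both activates and represses the output, so steady-state expression peaks only at intermediate morphogen doses. The Hill parameters below are arbitrary illustrations, not the values fitted in the paper.

```python
# Minimal sketch of a 2-node "stripe" circuit: the morphogen sensor S both
# activates the output gene (dominant at low-to-mid doses) and represses it
# (dominant at high doses), so output peaks at intermediate morphogen levels.
# All parameters are illustrative, not the fitted values from the study.

def hill_act(x, K, n=2):
    return x**n / (K**n + x**n)

def hill_rep(x, K, n=2):
    return K**n / (K**n + x**n)

def stripe_output(morphogen, K_act=0.1, K_rep=1.0):
    sensor = morphogen                      # sensor level tracks the morphogen dose
    return hill_act(sensor, K_act) * hill_rep(sensor, K_rep)

# Sweep the morphogen gradient: low and high doses give little output,
# intermediate doses give a band of expression -- the "stripe".
for dose in [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]:
    print(f"morphogen={dose:<5} output={stripe_output(dose):.2f}")
```

Inverting the logic, so that expression is suppressed only at intermediate doses, gives the complementary "anti-stripe" profile mentioned above.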


"Combining exhaustive computational modeling with synthetic biology is more efficient and powerful than building networks one-by-one" says the corresponding author Mark Isalan. "Our approach provides a new and efficient recipe for synthetic biology -- a new scientific discipline which aims to engineer all kinds of useful biological systems."




Story Source:


The above story is based on materials provided by Center for Genomic Regulation. Note: Materials may be edited for content and length.



Pneumonia: More precise diagnosis method developed by interdisciplinary research team

A patient survives life-threatening trauma, is intubated in the intensive care unit (ICU) to support his or her affected vital functions, starts to recover, and then develops pneumonia. It's a scenario well-known to physicians, who understand that the development of ventilator-associated pneumonia in critically ill patients often results in significant morbidity, mortality, and additional health care costs.



An interdisciplinary team of George Washington University (GW) researchers is investigating more accurate and rapid methods of identifying bacterial pathogens in patients with pulmonary infections, which could lead to more targeted antimicrobial therapy with potentially fewer adverse effects and lower costs. Next-generation sequencing (NGS) of samples from the sputum of intubated patients, as outlined in their recently published paper in the Journal of Clinical Microbiology, may enable more focused treatment of pneumonia in the critically ill, which has the potential to reduce health care spending, as well as improve survival.


"Currently, patients who develop pneumonia after entering the ICU are subjected to broad-spectrum antibiotics, which adds costs, potentially increases the risk of development of antimicrobial resistance, and creates a greater likelihood of an adverse effect attributable to the antibiotics," said co-author Gary Simon, M.D., Ph.D., Walter G. Ross Professor of Medicine and director of the Division of Infectious Diseases at the GW School of Medicine and Health Sciences (SMHS). "In our paper, we show these methods could improve if we establish a more precise microbiologic cause."


NGS, or the process of determining the DNA sequence of a patient's genome and microbiome, provides the means to establish a more precise microbiologic cause, according to co-author Timothy McCaffrey, Ph.D., professor of medicine and director of the Division of Genomic Medicine at GW SMHS.


"Through analyzing the data provided by the NGS, we were able to identify bacteria not previously identified through standard microbiological methods," said McCaffrey.


As technical advances reduce the processing and sequencing times, NGS-based methods may ultimately be able to provide clinicians with rapid, precise, culture-independent identification of bacterial, fungal, and viral pathogens and their antimicrobial sensitivity profiles.


"This will allow for a more precise patient population to be treated for pneumonia," said co-author Marc Siegel, M.D., assistant professor of medicine at GW SMHS. "Using this technology, physicians in the future should be able to make a more accurate diagnosis of the cause of what the pneumonia is and tailor their therapy accordingly."


Ian Toma, M.D., Ph.D., MSHS, visiting assistant professor in the Division of Genomic Medicine and Department of Physical Therapy and Health Care Sciences at GW SMHS, developed the NGS procedure using the most advanced sequencing methods available. "It was a challenging proof-of-concept study and a truly interdisciplinary translational research effort that will likely be implemented into clinical practice within the near future," he said.


Keith Crandall, Ph.D., director of the Computational Biology Institute -- a new interdisciplinary research strategic initiative at GW -- and his group of researchers contributed to the NGS data analysis with their unique bioinformatics tool "PathoScope," a promising application for identification of pathogens.


"Our tool provides a powerful statistical approach for sifting through NGS data and quickly identifying and characterizing pathogens from a patient's sample," said Crandall. "This is truly 'personalized medicine' as we identify specific strains of bacteria infecting individual patients and provide physicians with targeted information for antibiotic treatments for each individual."




Story Source:


The above story is based on materials provided by George Washington University. Note: Materials may be edited for content and length.



Termites evolved complex bioreactors 30 million years ago

Achieving complete breakdown of plant biomass for energy conversion in industrialized bioreactors remains a complex challenge, but new research shows that termite fungus farmers solved this problem more than 30 million years ago. The new insight reveals that the great success of termite farmers as plant decomposers is due to division of labor between a fungus breaking down complex plant components and gut bacteria contributing enzymes for final digestion.



Sophisticated management in termite fungus farms


Fungus-farming termites are dominant plant decomposers in (sub)tropical Sub-Saharan Africa and Southeast Asia, where they in some areas decompose up to 90% of all dead plant material. They achieve near-complete plant decomposition through intricate multi-stage cooperation between the Termitomyces fungi and gut bacteria, with the termites managing these symbionts by providing gut compartments and nest infrastructure. Researchers at the Centre for Social Evolution, Department of Biology, University of Copenhagen and Beijing Genomics Institute (BGI, China) discovered this by analyzing plant decomposition genes in the first genome sequencing of a fungus-farming termite and its fungal crop, and bacterial gut communities.


Termites manage their fungus farm in a highly structured way. Older termite workers collect plant material and bring it to the nest. Younger workers eat the plant material together with Termitomyces fungal spores, and this plant-spore mix is defecated as a new layer of fungus garden. Within the garden, Termitomyces rapidly grows on the plant substrate until it is used up, after which older termites consume the fungus garden. By then, nearly all organic matter has been broken down.


"While we have so far focused on the fungus that feeds the termites, it is now clear that termite gut bacteria play a major role in giving the symbiosis its high efficiency," says Associate Professor Michael Poulsen, who spearheaded the work. "But it took a massive effort of sequencing the genome of the termite itself, its fungus, and several gut metagenomics to analyze the enzymes involved in plant decomposition," adds Assistant Professor Guojie Zhang, who made the genome sequencing happen at BGI Shenzhen.


A symbiotic community optimized for efficient plant decomposition


A remarkable 86% of all the glycoside hydrolase enzyme families known from living organisms were present in the farming symbiosis. The fungus coded for enzymes needed to handle complex carbohydrates, while gut microbes contributed enzymes for the final digestion of oligosaccharides. The first gut passage, thus, mainly serves to inoculate the plant substrate with fungal spores, while gut bacteria play a prominent digestive role during second gut passage.


Termite colonies are founded by a single queen and king that disperse by air, but lose their wings when locking themselves up for life in an underground royal chamber. As the colony grows, the queen swells up to gigantic proportions and becomes an egg-laying machine.


The royal pair may survive for decades and maintain a very large colony of short-lived workers and soldiers, who take care of all colony duties. The metagenomic analyses of the queen gut showed that it contained a highly simplified bacterial community lacking plant decomposition enzymes. This suggests that the royal pair is exempt from decomposition duties and receives a high-quality fungal diet from their workers.


The study was funded by a STENO grant from The Danish Council for Independent Research | Natural Sciences to Michael Poulsen, a Danish National Research Foundation Centre of Excellence grant (DNRF57) to Jacobus J. Boomsma, and a Marie Curie International Incoming Fellowship (300837) to Guojie Zhang.




Story Source:


The above story is based on materials provided by Faculty of Science - University of Copenhagen. Note: Materials may be edited for content and length.



Critically ill ICU patients lose almost all of their gut microbes and the ones left aren't good

Researchers at the University of Chicago have shown that after a long stay in the Intensive Care Unit (ICU) only a handful of pathogenic microbe species remain behind in patients' intestines. The team tested these remaining pathogens and discovered that some can become deadly when provoked by conditions that mimic the body's stress response to illness.



The findings, published in mBio®, the online open-access journal of the American Society for Microbiology, may lead to better monitoring and treatment of ICU patients who can develop a life-threatening systemic infection known as sepsis.


"I have watched patients die from sepsis -- it isn't their injuries or mechanical problems that are the problem," says John Alverdy, a gastrointestinal surgeon and one of two senior authors on the study.


"Our hypothesis has always been that the gut microflora in these patients are very abnormal, and these could be the culprits that lead to sepsis," he says.


The current study supports this idea. Alverdy and Olga Zaborina, a microbiologist, wanted to know what happens to the gut microbes of ICU patients, who receive repeated courses of multiple antibiotics to ward off infections.


They found that patients with stays longer than a month had only one to four types of microbes in their gut, as measured from fecal samples -- compared to about 40 different types found in healthy volunteers.


Four of these patients had gut microbe communities with just two members -- an infectious Candida yeast strain and a pathogenic bacterial strain, such as Enterococcus faecium, Staphylococcus aureus or other bugs associated with hospital-acquired infections. Not surprisingly, almost all of the pathogenic bacteria in these patients were antibiotic resistant.


"They've got a lot of bad guys in there, but the presence of bad guys alone doesn't tell you who's going to live or die," says Alverdy. "It's not only which microbes are there, but how they behave when provoked by the harsh and hostile conditions of critical illness."


To check that behavior, the team cultured microbe communities from ICU patients and tested their ability to cause harm in a laboratory model of virulence. The tiny Caenorhabditis elegans worm normally feeds on soil microbes, but when fed pathogenic microbes in the lab, the worms act as a canary-in-the-coalmine indicator of virulence. The more virulent a microbe, the more worms it kills.


Feeding the worms the yeast-plus-bacteria communities did not kill many worms, but when the bacteria were removed, the yeast alone became deadly. In some cases, simply changing the bacterial partner caused virulence. This suggests that even though the two microbes in these communities are both pathogens, they exist in a communal balance in the gut that does not always lead to virulence.


"During host stress, these two microbes suppress the virulence of each other," says Zaborina. "But if you do something to one of them, then that can change their behavior."


For example, the team found that adding an opioid drug to the mix -- which mimics stress signals released by sick patients -- could also switch behavior from a peaceful coexistence called commensalism to virulence for some microbe pairs. The team could prevent this switch to virulence by feeding the worms a molecule that created high phosphate levels in their gut.


Although the study was too small for statistical significance, there was a correlation between microbe behavior and whether a patient lived or died: two patients who were discharged had microbes that coexisted peacefully, but the three who died of sepsis had at least one sample that displayed pathogenic behavior.


The work suggests that doctors should try to find ways to minimize the excessive use of antibiotics and stabilize the microbes that do remain in ICU patients' guts. This might be achieved by delivering phosphate or reducing the stress signals in the gut. Such efforts could keep microbes calm and non-virulent, leading to better patient outcomes.



Even the Tiniest Broken Part Can End an F1 Race Before It Begins


Nico Rosberg of Germany and Mercedes GP is pushed back to the garage during the formation lap after experiencing problems before the Singapore Formula One Grand Prix at Marina Bay Street Circuit on September 21, 2014 in Singapore, Singapore. Lars Baron / Getty Images



To finish first, the old saying goes, one must first finish.


Nowhere is that racing truism more evident these days than at Mercedes F1, where the failure of a wiring harness knocked Nico Rosberg out of Sunday’s Singapore Grand Prix after the 13th lap. It was a crushing setback for the German, who had been leading the championship and now trails teammate Lewis Hamilton by three points with five races to go. More troubling, it was the fifth time a Mercedes driver has been saddled with a Did Not Finish (DNF) by mechanical issues, a reminder that a reliable car is just as important as an ace driver.


Nico’s problems seemed to come out of nowhere, at least for those of us not privy to the inner communications of the Mercedes F1 team. The car performed well in practice and qualifying, but had significant issues on race day. The first indication that something might be amiss occurred before Nico’s car first left the pit garage. The team had the car up on jacks and was running through gear changes with the wheels spinning, which is unusual to see just before a race. More issues popped up during a practice start when leaving the pit lane just a half-hour before the race began. Rosberg sat at the end of the pit lane for longer than usual, before laying down a big strip of rubber during a burnout, suggesting problems with the electronically controlled clutch. The team replaced the steering wheel and reset the computer systems (like with an everyday computer, a simple reset can work wonders), but no luck.


Sunday’s snafu was eventually traced to a faulty wiring harness connecting the steering wheel to the in-car computer. In your car, that might mean the “check engine” light comes on and you have to turn down the radio with the knob on the dash instead of the button on the wheel. But it can be disastrous in an F1 car, because almost everything is controlled from the steering wheel. Rosberg was able to drive the car, but couldn’t make changes to the engine, differential, clutch and other vital functions or even, for a time, communicate with his crew. The car was, in effect, unfit for racing.


With the clutch (activated using a paddle on the back of the steering wheel) inoperable, Rosberg failed to get away during the formation lap. That forced him to start the race from the pit lane, and only after the rest of the field had cleared the grid. Rosberg did the best he could without critical functions like the drag reduction system or hybrid power that would have allowed the wickedly fast Mercedes to cut through the field. "I was only able to change gear," Rosberg says. Even shifting, another function done with paddles behind the wheel, proved difficult. The transmission would jump two gears at a time, leaving Rosberg with just first, third, fifth, and seventh gears.


Rosberg limped along for 13 laps, averaging more than 150 mph and keeping pace with the back of the field. But when it came time for his first pit stop, it went from bad to worse. He crept into the pit, where his crew changed all four tires, replaced the steering wheel, and attempted to get him going again. But with no clutch function to speak of, the car simply would not move, and Rosberg eventually called it a day.


Consistent Part Failures


Success in F1 requires consummate skill, of course, but it also requires a reliable car that performs consistently. A wiring loom is not a part anyone thinks much about. It’s not prone to failure like a transmission or engine full of moving parts. But it’s just as important, because when it fails, the car can’t race. And it’s not something that can be fixed mid-race by remotely restarting the computer, having the driver change a setting, or replacing a part during a pit stop.


Mercedes has been bitten by a range of part failures this season, coming from the engine (two DNF), the brakes (one DNF and two hampered races), the gearbox (one DNF), and now the wiring loom. We don’t have any additional information about what caused this latest issue or how it might be prevented, but Mercedes team boss Toto Wolff says his team wants the championship to be decided on the track, not by reliability problems. “That would obviously be something which would not be satisfying at all,” says Wolff. “We need to refocus and get our heads down, keep concentrating and find out what we can do.”


When it's working, the Mercedes W05 is wickedly fast. In seven of the nine races in which both Rosberg and Hamilton finished, they took first and second place. But because Daniel Ricciardo's Red Bull car has been effectively bulletproof (he's been forced to retire from just one race, and that was because of human error during a pit stop), consistency has kept him in the fight. If Mercedes continues to have reliability problems through the remainder of the season, Ricciardo could snatch the championship at the last race in Abu Dhabi.



VESTIGIAL: Learn what it means! [Pharyngula]


Vestigial organs are relics, reduced in function or even completely losing a function. Finding a novel function, or an expanded secondary function, does not make such organs non-vestigial.


The appendix in humans, for instance, is a vestigial organ, despite all the insistence by creationists and less-informed scientists that finding expanded local elements of the immune system means it isn't. An organ is vestigial if it is reduced in size or utility compared to homologous organs in other animals; another piece of evidence is if it exhibits a wide range of variation that suggests those differences have no selective component. That you can artificially reduce the size of an appendix by literally cutting it out, with no effect on the individual (other than that they survive a potentially acute and dangerous inflammation), tells us that these are vestigial.


I went through this whole ridiculous argument years ago, when the press seized upon an explanation of immune function in the appendix to suggest that a key indicator of evolution was false. It was total nonsense, that only refuted a straw version of creationism. I even cited Darwin himself to demonstrate the ignorance of the concept by the modern press.



An organ, serving for two purposes, may become rudimentary or utterly aborted for one, even the more important purpose, and remain perfectly efficient for the other. Thus in plants, the office of the pistil is to allow the pollen-tubes to reach the ovules within the ovarium. The pistil consists of a stigma supported on a style; but in some Compositae, the male florets, which of course cannot be fecundated, have a rudimentary pistil, for it is not crowned with a stigma; but the style remains well developed and is clothed in the usual manner with hairs, which serve to brush the pollen out of the surrounding and conjoined anthers. Again, an organ may become rudimentary for its proper purpose, and be used for a distinct one: in certain fishes the swimbladder seems to be rudimentary for its proper function of giving buoyancy, but has become converted into a nascent breathing organ or lung. Many similar instances could be given.



I’m dragging out Darwin because it’s happening again. An analysis of whale pelvic bones supposedly refutes the notion that they are vestigial, because they play a role in sex.



Both whales and dolphins have pelvic (hip) bones, evolutionary remnants from when their ancestors walked on land more than 40 million years ago. Common wisdom has long held that those bones are simply vestigial, slowly withering away like tailbones on humans.


New research from USC and the Natural History Museum of Los Angeles County (NHM) flies directly in the face of that assumption, finding that not only do those pelvic bones serve a purpose — but their size and possibly shape are influenced by the forces of sexual selection.


"Everyone’s always assumed that if you gave whales and dolphins a few more million years of evolution, the pelvic bones would disappear. But it appears that’s not the case," said Matthew Dean, assistant professor at the USC Dornsife College of Letters, Arts and Sciences, and co-corresponding author of a paper on the research that was published online by Evolution on Sept. 3.





Of course the creationists are thrilled to pieces. The press loves the spin of evidence that ‘refutes’ evolution, but the creationists love it even more. After telling us that scientists keep changing the meaning of vestigial, they crow over this rebuke from one of the authors of the study:



This is not just our observation. The scientists who revealed the usefulness of whale hips are rethinking what it means to be vestigial. Or so it sounds from the remarks of biologist Matthew Dean at USC, a co-author of the paper in Evolution, commenting in Science Daily:



"Our research really changes the way we think about the evolution of whale pelvic bones in particular, but more generally about structures we call ‘vestigial.’ As a parallel, we are now learning that our appendix is actually quite important in several immune processes, not a functionally useless structure," Dean said.



Anyone who thinks whale hips are functionless, just like your appendix, should try telling that to a lonely gentleman whale. The career of this evolutionary icon isn’t over yet, I’m sure, but its importance in the evolutionary pantheon is due for a serious downgrade.



I’ve read the paper. It’s about a hypothesis that sexual selection may be maintaining the pelvic bones; it discusses purely evolutionary hypotheses at length, and is not a paper to support intelligent design. Yet here one of the authors is parroting common misconceptions about vestigial organs, and that is infuriating: if you’re going to write papers about a subject, you should know your background adequately. No, finding a retained secondary function for an organ does not mean you have to rethink vestigial organs. See Darwin again.



An organ, serving for two purposes, may become rudimentary or utterly aborted for one, even the more important purpose, and remain perfectly efficient for the other.



It turns out that I also have those same muscles, attached to my pelvis, and I can actually wiggle my penis at will (I hope I don't have to demonstrate this; any human male will do as a demo, just ask, or try it yourself). I also use muscles attached to my pelvis to…walk. Whales have reduced, vestigial pelvic bones that have lost the functions needed for walking, but have retained a function for wiggling their penises. This is not a surprise; it is not a revelation that changes our understanding of evolution; I would not get a prize if I showed up at the yearly Evolution meeting, dropped trou, and demonstrated my skills with a tassel.


Fools of the Discovery Institute to the contrary, whale pelves are still excellent examples of vestigial organs, and haven’t been ‘downgraded’ at all. And if the IDiots want to argue that we’ve been jiggering the definition to match circumstances, I’ll just point them to that Darwin quote again. Can’t get much more basic and original than that.



An organ, serving for two purposes, may become rudimentary or utterly aborted for one, even the more important purpose, and remain perfectly efficient for the other.



I will say that I wish investigators in evolutionary biology had a better grounding in elementary evolutionary theory, so they would stop inventing these imaginary conflicts to puff up their work.




Let me just add, the paper is fine — it covers the specific topic of its title, Sexual selection targets cetacean pelvic bones, perfectly well. It’s these off-the-cuff remarks to the press that reflect an embarrassing ignorance.



Facebook Lays Out Its Roadmap for Creating Internet-Connected Drones


A still from a promotional video showing how Facebook and Internet.org will deliver internet access via aerial drone. Internet.org



If companies like Facebook and Google have their way, everyone in the world will have access to the internet within the next few decades. But while these tech giants seem to have all the money, expertise, and resolve they need to accomplish that goal—vowing to offer internet connections via things like high-altitude balloons and flying drones—Yael Maguire makes one thing clear: it’s going to be a bumpy ride.


“We’re going to have to push the edge of solar technology, battery technology, composite technology,” Maguire, the engineering director of Facebook’s new Connectivity Lab, said on Monday during a talk at the Social Good Summit in New York City, referring to the lab’s work on drones. “There are a whole bunch of challenges.”


Facebook formed its Connectivity Lab earlier this year. Dovetailing with CEO Mark Zuckerberg's non-profit group, Internet.org, its goal is to build and launch a fleet of solar-powered drones that can connect the billions of people currently living off the grid to the internet. It arrived just a month before Google agreed to acquire Titan Aerospace, a startup that makes its own solar-powered drones, and according to Maguire, such projects are a long way from success. There are substantial operational, technical, and regulatory hurdles these companies will have to overcome before any of their technologies can, well, take flight.


‘We’re going to have to push the edge of solar technology, battery technology, composite technology.’


In order to fly its drones for months or years at a time, as it would have to do in order to provide consistent connectivity, Maguire explained, Facebook’s drones will have to fly “above weather, above all airspace,” which is anywhere from 60,000 to 90,000 feet in the air. That puts these drones on tricky regulatory footing, since there are essentially no regulations on aircraft that fly above 60,000 feet in the air. “All the rules exist for satellites, and we’re invested in those. They play a very useful role, but we also have to help pave new ground,” Maguire said.


Facebook and its counterparts will also have to find a way around regulations dictating that there must be one human operator to every drone, which could drastically limit the potential of such an innovation to scale. For proof, Maguire pointed to a recent solar drone demonstration by a British company, which ended after two weeks to give the pilots a break. “It’s like playing a videogame for two weeks straight with no rest,” he said. “We need a regulatory environment that will be open to one pilot perhaps managing 10 or 100 drones. We have to figure these things out.”


Still, despite the obstacles, Maguire said he expects the Connectivity Lab's drones will be ready to begin testing by next year. Just where that will be, Maguire can't say. The company has also identified 21 locations in Latin America, Asia, and Africa where it would like to deploy its connectivity projects, which Maguire says are likely two to five years away. But Facebook will not be running these projects itself, Maguire warns. It's actively looking for partners on the ground—be they governments, communities, or local businesses—who will deploy the technology the Connectivity Lab has created.


“We’re hoping we’ll be able to make this technology open for other people to use…because we think they have a more scalable model for getting the technology out there,” he says. “It’s going to be an enormous effort. Trying to connect everyone is the problem of our generation.”



New App Aims to Make Trading Stocks As Easy As Posting Selfies


Robinhood co-founders Vlad Tenev (left) and Baiju Bhatt. Robinhood



Standing in line for coffee may seem like an awkward time to trade stocks. But for the makers of the new app Robinhood, those casual moments are exactly when they want to reach a new generation of potential investors who might otherwise feel the markets are closed to them.


Vlad Tenev and Baiju Bhatt, as many startup founders do, met at Stanford, and both spent time in the financial services industry before joining up to figure out what a stock brokerage built exclusively for mobile devices would look like. The issue, they say, isn't just convenience: it's access. "The fact that a lot of people, especially younger folks, are not investing in the stock market is something we really think needs addressing," Bhatt says.


The pair pitch Robinhood as a 21st-century alternative to traditional, stuffy brokerage firms that still rely on (gasp!) websites for their version of electronic trading. Trading stocks, Tenev says, should be as easy as summoning a ride on Uber or posting a picture to Instagram. Along with going mobile-first, the main way Tenev and Bhatt hope to set themselves apart is by not charging fees for trades.





At a time when executing a stock trade has become a purely automated process, Tenev says, charging a typical $7 to $10 commission makes as much sense as charging to send an email. “The days when humans passed around tickets on a trading floor are long gone,” he says.


Tenev claims that traditional brokerages depend on that extra revenue to prop up legacy infrastructure and the trappings of old Wall Street—fancy offices and logos engraved in marble. However much that branding really still persists, he believes that its lasting impact is to make young people feel like the stock market is a resource available only to people with a lot of money.


He and Bhatt like to describe the stock market as a "tool," arguing that even someone who just wants to trade with a few hundred dollars should be able to jump in to get a feel for the market and how it works. They're not likely to take that leap, Bhatt says, if they're charged $10 per trade, which winds up being a significant percentage of such a small stake. "We see it as something you don't need even thousands of dollars for," he says.
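
A quick back-of-the-envelope calculation, with illustrative figures only, shows why a flat commission looms so large on a small stake:

```python
# Illustrative arithmetic: a flat commission as a share of a small trade.
position = 300.0             # hypothetical stake in dollars
commission = 10.0            # flat fee per trade, at the high end cited above
round_trip = 2 * commission  # one buy plus one sell

print(f"One trade:  {commission / position:.1%} of the position")
print(f"Round trip: {round_trip / position:.1%} of the position")
# On a $300 position, the stock must gain roughly 6.7% just to cover commissions.
```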


A Calming Effect


The message seems to be reaching the intended audience. Tenev says that of 500,000 people who have signed up for the Robinhood waitlist, 80 percent are between the ages of 18 and 29. (Tenev and Bhatt hope to release Robinhood widely by early next year.)


And another kind of investor really likes those numbers. On Tuesday, Robinhood announced $13 million in Series A funding led by financial services aficionado Jan Hammer of Index Ventures. Box CEO Aaron Levie is another among a varied group of investors in the round, which also includes Snoop Dogg and Oscar-winner Jared Leto.


As to the name of the company, Tenev and Bhatt say they’re taking a system seen as closed off and only for the rich and making it available to everyone else. At their most idealistic, they even believe that at scale, a critical mass of smalltime individual investors can wrestle the market back from institutional titans.


Big-time traders can afford to massively leverage themselves in pursuit of short-term gains, Bhatt says, which makes markets volatile. Value-minded individual investors can work as a resistant force to those spikes and troughs, he says, especially if it becomes easier for them to get into the market in the first place: "You may see that one of the things that emerges is that markets have less volatility because a greater number of individuals start to participate."



Arizona’s Law Against Revenge Porn: Nice Try, But It Makes All Nudes Illegal





If you shared or re-published any of the images of nude celebrities that leaked online earlier this month, you could be charged with a felony under a new Arizona law.


You could also be charged for sharing an image of a woman’s unclothed breast while she’s feeding her infant, or for publishing images of the stripped Abu Ghraib prisoners piled inhumanely in a pyramid. There’s even potential risk from publishing the latest nip-slip image of a celebrity suffering a wardrobe malfunction or caught in an ungraceful sprawl while exiting a taxi sans underwear.


That’s essentially the argument in a lawsuit filed in Arizona today by the American Civil Liberties Union on behalf of a coalition of media groups, librarians and others who are challenging the law on the constitutional grounds that it restricts free speech and freedom of the press protections guaranteed under the First Amendment. They also say it violates the Fifth and Fourteenth Amendments, and is “unconstitutionally vague” and “vastly overbroad” in reach.


“States can address malicious invasions of privacy without treading on free speech, with laws that are carefully tailored to address real harms. Arizona’s is not,” said Lee Rowland, staff attorney for the ACLU, in a statement.


Arizona’s “anti-revenge porn” law, which went into effect in July, was passed with good intentions. Like a dozen other states that have passed similar laws since 2013, Arizona hoped to address the disturbing trend in which embittered lovers distribute nude images of ex-spouses and paramours in an effort to humiliate or cause professional or personal harm.


But unlike other states, which have narrowly restricted the reach of their laws or made the offense a misdemeanor instead of a felony, Arizona’s law is so poorly written it affects just about anyone who shares or publishes any nude image without explicit consent.


Although Arizona’s law is particularly draconian, Rowland says any law that criminalizes revenge porn is actually problematic.


“As a general matter, we don’t criminalize gossip or other truthful but embarrassing information about people that we have relationships with,” Rowland told WIRED. “Revenge porn does create acute harm for its victims, who are predominantly women, and I think it’s valuable for lawmakers to have a conversation about how to offer victims of revenge porn relief. But there are many solutions, including civil law, that don’t create an acute chill on protected speech like Arizona’s law does.”


The law makes it criminal to disclose, display, publish, or advertise any images of a person who is “in a state of nudity or engaged in specific sexual activities” if the person who shares or publishes the images “knows or should have known” that the person depicted in the image did not consent to “the disclosure.”


“State of nudity” is defined as the “appearance of a human anus, genitals or a female breast below a point immediately above the top of the areola” or a “state of dress that fails to opaquely cover a human anus, genitals or a female breast below a point immediately above the top of the areola.” Recent images of Rihanna in a see-through Swarovski-crystal gown could conceivably fall into this category. Although the law provides an exception for images “involving voluntary exposure in a public or commercial setting,” it provides no definition of the terms.


The law places newspapers, photographers, librarians, art galleries, booksellers, and publishers at risk of committing a felony if they fail to obtain prior consent from anyone who appears in nude images they publish or distribute, whether the image is used for artistic purposes or for its news value.


This means you too could be in violation for simply sharing someone’s nude selfie or the cute nude infant-on-a-rug image that your friend posted on Facebook if you don’t obtain prior consent to share the pic. And situations that might otherwise fall under copyright and fair use laws could turn criminal if a blogger, for example, republished a nude “selfie” that someone self-posted on a widely-accessible social networking account without permission. The law doesn’t require that the person in the picture have an expectation of privacy or that the person even be recognizable in the photograph.


Artist Betsy Schneider's images of her son, titled "Victor Block 4," are an example of the type of images that could fall under Arizona's law. Image courtesy of Betsy Schneider.


Under the law, a woman who receives an unsolicited image of a man’s penis could also be convicted of a felony if she shares the photograph with anyone else. Even someone who publishes images of naked and dead Holocaust victims could be at risk, since the law provides no exception in circumstances where consent can no longer be obtained because the person has died.


Even when consent exists, it only applies to the initial publication or distribution of the image. Thereafter, anyone else who publishes or distributes the image must also obtain consent.


An individual who poses for a nude photo and gives consent to a commercial photographer for its distribution but later revokes that consent—either because he or she regrets the agreement or feels that compensation for the picture was inadequate—could make it a felony for anyone to further exhibit the image.


“If another person could unilaterally prevent you from sharing an image you initially had every right to share, that places a lot of power in a private party’s hands,” Rowland says.


The law does not establish when Arizona has jurisdiction to prosecute—for example, whether the person depicted in the image must be an Arizona citizen, whether the image was shared or published by someone who lives in Arizona or whether it’s enough that an image was published on a web site that was viewed by residents of Arizona.


The law also doesn't specifically require that someone have malicious intent in sharing or publishing an image or that a person in the image is actually harmed by sharing the picture. This means, essentially, that "anyone who shares an image, even if their doing so is purely innocent and they have no reason to think the person pictured would object, is treated the same as someone committing a knowing invasion of privacy under the statute," Rowland said.


The consequences for violating the law are steep. Not only is it a felony punishable by up to three years and nine months in prison, it’s also potentially a sex crime, requiring someone convicted of the law to register as a sex offender. In other words, “it has the potential of life-ruining consequences,” Rowland said.


Sex offense registration laws apply if a judge finds there is a sexual motivation behind a crime. Given that the Arizona nude photo law is categorized by the state under a list of sex offenses, "it is presumably a sex offense," said Rowland. "And if a judge finds that any motivation behind a person who shares an image was sexual, they would be subject to the sex offender registration requirement. It certainly is not hard to imagine that someone who shares [a hacked image of a nude celebrity] because they think the person is beautiful could qualify as sexual motivation under this law."


And it’s not farfetched to think that Arizona authorities would wield the law in a broad manner.


In 2008, authorities in Maricopa County, Arizona, considered launching a child porn investigation into the Phoenix New Times and its publisher for posting a series of images taken by Arizona artist and university professor Betsy Schneider of her own children. The images, from an exhibition of her work, included depictions of her naked children when they were infants. A Phoenix city attorney said at the time that if the photos were found to be illegal, "Everybody who picked up one of those issues (of the New Times) could be prosecuted for possessing child pornography."



‘Big Bang Signal’ Could All Be Dust


The BICEP2 telescope detected a swirl pattern resembling an expected signal from the Big Bang, but was it all cosmic dust? Click here for the full-size graphic. Olena Shmahalo/Quanta Magazine



There was little need, before, to know exactly how much dust peppers outer space, far from the plane of the Milky Way. Scientists understood that the dimly radiating grains aligned with our galaxy’s magnetic field and that the field’s twists and turns gave a subtle swirl to the dust glow. But those swirls were too faint to see. Only since March, when researchers claimed to have glimpsed the edge of space and time with a fantastically sensitive telescope, has the dust demanded a reckoning. For, like a cuckoo egg masquerading in a warbler’s nest, its pattern mimics a predicted signal from the Big Bang.


Now, scientists have shown that the swirl pattern touted as evidence of primordial gravitational waves — ripples in space and time dating to the universe’s explosive birth — could instead all come from magnetically aligned dust. A new analysis of data from the Planck space telescope has concluded that the tiny silicate and carbonate particles spewed into interstellar space by dying stars could account for as much as 100 percent of the signal detected by the BICEP2 telescope and announced to great fanfare this spring.


The Planck analysis is “relatively definitive in that we can’t exclude that the entirety of our signal is from dust,” said Brian Keating, an astrophysicist at the University of California, San Diego, and a member of the BICEP2 collaboration.


“We were, of course, disappointed,” said Planck team member Jonathan Aumont of the Université Paris-Sud.


The new dust analysis leaves open the possibility that part of the BICEP2 signal comes from primordial gravitational waves, which are the long-sought fingerprints of a leading Big Bang theory called “inflation.” If the universe began with this brief period of exponential expansion, as the cosmologist Alan Guth proposed in 1980, then quantum-size ripples would have stretched into huge, permanent undulations in the fabric of the universe. These gravitational waves would have stamped a swirl pattern, called “B-mode” polarization, in the cosmic microwave background, the oldest light now detectable in the sky.


But beware the cuckoo.


At a much-publicized March 17 news conference, BICEP2 team leader John Kovac of Harvard University announced that the group’s South Pole-based telescope had found evidence of B-modes that “matched very closely the predicted pattern” of primordial gravitational waves. After probing a region of space far from the dusty plane of the galaxy — “the cleanest patch of sky we can train our telescope on,” Kovac said — and measuring the polarization of incoming microwaves with 12 times the sensitivity of any previous experiment, he and his colleagues were convinced that they had detected proof of inflation.





But in the months following the announcement, outside experts cried foul, arguing that the scientists had used highly uncertain models of galactic dust emission that now appear to have underestimated dust contamination in the BICEP2 region. “The community as a whole underestimated it,” said Avi Loeb, a theorist at Harvard who is not affiliated with BICEP2.


Had the BICEP2 telescope been capable of detecting B-modes at multiple microwave frequencies, the scientists could easily have distinguished between light from interstellar dust grains and the more ancient light they sought. Both light sources become brighter at higher frequencies, but dust emission brightens more dramatically. By plotting the strength of the B-mode signal as a function of frequency, the scientists could have determined whether the curve resembled the shallow rise of the cosmic microwave background or the steeper rise of dust light.


Instead, the team opted for maximum sensitivity and designed their detectors to receive a single frequency: 150 gigahertz. “This was the Achilles’ heel of the experiment,” Loeb said.


With higher frequencies swamped by dust emission and lower frequencies by another “foreground” called synchrotron radiation, 150 gigahertz sat at a sweet spot with minimal contamination. But a single data point can lie on any curve.


Unable to directly determine the fraction of their signal that came from dust, the scientists relied on existing models of contamination in their patch of the sky — including data incorrectly extracted from a preliminary dust map in a Planck scientist’s PowerPoint slide — and concluded that dust could account for no more than one-fifth of their signal. After a group led by Raphael Flauger, now of Carnegie Mellon University, pointed out errors and the Planck team released better (though still preliminary) dust estimates, Kovac and his team revised their paper and hedged on their claim of a major discovery.


“They should have been much more cautious in their initial presentation,” said Lyman Page, a cosmologist at Princeton University. “They should not have claimed measuring a primordial B-mode because the uncertainty on foregrounds is and was simply too large.”


Multiple frequencies were needed. From 2009 to 2013, the telescope on board the European Space Agency’s Planck spacecraft measured polarization throughout the sky at seven different microwave frequencies, though in any given patch roughly 100 times less sensitively than BICEP2. In their new analysis, Planck scientists partitioned the sky into patches the size of the BICEP2 observation region and calculated the amount of B-mode polarization present in each patch at 353 gigahertz, a high frequency where dust emission dominates the signal. Some of the other patches gave off only half as much dust light as the BICEP2 patch, making it, in Keating’s words, “not squeaky clean.”


Planck’s full-sky map showing the projected dust contamination at 150 GHz, extrapolated from the 353 GHz data, with the cleanest regions shown in blue and the dustiest in red. The northern galactic hemisphere appears at left and the southern hemisphere at right, with a black contour outlining the approximate BICEP2 observation region. Planck collaboration



The Planck telescope lacked the sensitivity to detect the faint B-modes that BICEP2 saw at 150 gigahertz, but by knowing roughly how dust emission varies with frequency, the scientists could extrapolate the dust signal down from its measured value at 353 gigahertz. They calculated that excess dust emission would produce B-mode polarization as strong as the signal detected by BICEP2, give or take roughly one-third of that strength.
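The extrapolation step is easy to picture with a toy calculation. The sketch below is a minimal illustration, not the Planck pipeline: it scales a dust brightness measured at 353 GHz down to 150 GHz assuming dust follows a modified blackbody spectrum. The dust temperature and spectral index used here are illustrative values of the kind quoted in the Planck dust literature, and a real analysis would also convert units and work with polarization power spectra rather than raw brightness.

    # A minimal sketch (not the Planck pipeline) of scaling a dust signal
    # measured at 353 GHz down to 150 GHz, assuming a modified blackbody
    # dust spectrum with illustrative parameters.
    import math

    H = 6.626e-34   # Planck constant, J*s
    K = 1.381e-23   # Boltzmann constant, J/K

    def planck_law(freq_hz, temp_k):
        """Blackbody brightness B_nu(T), up to constants that cancel in ratios."""
        x = H * freq_hz / (K * temp_k)
        return freq_hz**3 / (math.exp(x) - 1.0)

    def dust_brightness(freq_hz, temp_k=19.6, beta=1.6):
        """Modified blackbody: nu^beta * B_nu(T); temp and beta are illustrative."""
        return freq_hz**beta * planck_law(freq_hz, temp_k)

    # How much fainter dust emission is at 150 GHz than at 353 GHz:
    ratio = dust_brightness(150e9) / dust_brightness(353e9)
    print(f"dust at 150 GHz is roughly {ratio:.3f} of its 353 GHz brightness")

Under these assumptions the dust signal drops to a few percent of its 353 GHz brightness, which is why even a rough knowledge of the dust spectrum lets a strong 353 GHz measurement constrain the much fainter contamination at 150 GHz.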


“They more or less assumed that they could find a piece of the sky with low dust emission,” said Douglas Scott, a cosmologist at the University of British Columbia who was heavily involved in the new analysis. “And the Planck result shows there is no part of the sky where you can ignore the dust.”


Exactly how much of the total B-mode polarization comes from primordial gravitational waves, if any, will be a matter of intense ongoing analysis. If there is a primordial signal, its strength, quantified by a parameter called r, will reveal the amount of energy that infused space-time and drove it apart during inflation. The energy scale of inflation would be a major clue as to why it happened.


“I can’t overemphasize how interesting this is,” Stanford University inflationary theorist Eva Silverstein said of the possible values of r during a recent talk in Chicago. Theorists like Silverstein most want to know whether r is greater or less than 0.01, the crossover point between categories called large-field and small-field inflation. The former would reveal details of an all-encompassing theory of quantum gravity.


While house dust is mostly lint or dead cells, cosmic dust is typically composed of carbon, silicates and other minerals. This grain of interplanetary dust caught by a high-flying NASA aircraft measures only 10 microns across, or one-tenth the width of a human hair. NASA



Whereas the initial BICEP2 analysis pegged r at 0.2, corresponding to certain large-field inflationary models, the Planck study lowers its value much closer to 0. If the waves are detectable at all, a much more powerful telescope than BICEP2 will be needed to perceive them behind the swirly haze of galactic dust. Already, at least 10 existing or planned experiments have sufficient sensitivity to detect B-modes weaker than r = 0.1. The Atacama Cosmology Telescope, South Pole Telescope, and the combined BICEP/Keck Array all should be capable of measuring B-modes from gravitational waves within two to three years if the signal is larger than r = 0.01. A balloon-borne instrument called SPIDER will eventually achieve similar sensitivity.


To critics of the inflation idea, the heightened sensitivity of these experiments may be of little consolation. The theory is flexible enough to survive even if no primordial B-modes are found, making it virtually impossible to falsify.


“There are many models with r so small that you just wouldn’t see it with these experiments,” said Flauger, who, together with Silverstein and others, helped develop a testable string theory model of inflation with r = 0.07.


Inflation will remain the leading Big Bang theory even if the entire BICEP2 signal fades to dust, said Mark Trodden, a professor of physics at the University of Pennsylvania. It explains the smoothness and uniformity of the universe and gives a mechanism for structure formation, he explained — “but all this evidence is highly circumstantial.”


Confirmation of primordial gravitational waves would have locked the theory down, resolving once and for all the picture of the beginning of time. Now, “the jury is still out,” Keating said.


A joint analysis of data from Planck and BICEP2, which is expected to appear in November, could determine whether any primordial B-modes are mixed with the dust swirls in that clean, but not squeaky clean, patch of sky above the South Pole. By collaborating, Kovac said, the teams should be able to put a new upper limit on the value of r — an assurance that primordial gravitational waves must be weaker than a certain strength, if they exist at all — that “relies on data, not models” of dust contamination.


“I can promise that we are approaching the analysis with a completely unbiased attitude,” Kovac said. “We are as eager as everyone else to see the uncertainties reduced here, whatever the final answer.”


The success of BICEP2, Loeb said, was in increasing the sensitivity by an order of magnitude compared to previous experiments. “Definitely they detected something,” he said. “The significance of that depends on what the interpretation is. If it’s dust, it has no cosmological significance whatsoever.”



Samsung Woos Action Photogs With a Blazing-Fast Mirrorless Camera





There are “camera companies,” and there are “consumer-electronics companies.” Samsung is generally considered the latter, even though it’s made some very good interchangeable-lens cameras over the past few years. It’s just that the company makes so many other things, too.


Therein lies a challenge. For serious photographers, it helps to be thought of as a “camera company.” That’s not just for superficial reasons, as the range of lenses and accessories tends to be much more robust for the Canons and Nikons of the world. There’s also a sense that you know exactly what you’re getting if a company is focused on cameras and cameras only.


What you appear to be getting with Samsung’s new NX1 is perhaps the most powerful midrange DSLR-style camera on the market. This is a mirrorless camera, although it has a body style that looks more like a DSLR. At its core is an absurdly high-resolution 28.2-megapixel APS-C sensor, along with an image processor that will be able to pull off some incredible feats.


Among them: The ability to shoot 15fps at full resolution with continuous autofocus enabled. That autofocus system appears to be the brawniest in history, with 205 phase-detection sensors and 209 contrast-detection points. Not only will it be fast, it’ll fill the entire span of the sensor with focus zones.


Like a few other recent cameras, this one also shoots 4K video. True 4K, too: There’s a 4,096 x 2,160 capture mode at 24fps that records MP4 video using the H.265 codec. There’s also an Ultra HD (3,840 x 2,160) capture mode that records at 30fps, as well as your now-standard 1080p recording at 60, 30, and 24fps.


To frame everything up, there’s both an adjustable 3-inch OLED touchscreen and an eye-level OLED peephole. All the other specs generally match what’s out there from the “camera companies” in its price range: ISO settings that expand up to 51,200, full manual controls and RAW shooting, built-in Wi-Fi and NFC, and shutter speeds that crank up to 1/8000 of a second.


Starting in October, the flagship NX1 will be available in a body-only version for $1,500. An extended kit package with a 16-50mm/F2-F2.8 lens, an extra battery, and a battery grip will retail for $2,800. Sold separately, there’s also a new telephoto zoom lens that should pair well with the NX1’s fast-shooting ways for sports and wildlife photographers. That 50-150mm/F2.8 constant-aperture zoom lens will sell for $1,600, and the NX system has a focal length multiplier of 1.5X.



Ex-Tesla and NASA Engineers Make a Light Bulb That’s Smarter Than You





Sometime in early 2013, one of the supply chain engineers at Tesla leaned back in his chair and took a look around the Silicon Valley office. “It was a sunny day, and I looked up and I thought, ‘Why are these lights on with full power, when full sunlight is coming through the window?’” says Neil Joseph. An online search for a better, responsive bulb yielded only a few expensive commercial products. That October, Joseph (who says that even as a kid his two fascinations were lights and cars) left Tesla to start his own lighting company.




The company is Stack, and its first product is Alba (alba is Italian for ‘sunrise’). The Alba bulbs are designed to work autonomously, both by adjusting light output based on sunlight and by learning and adapting to their owners’ household habits. As Joseph sees it, it’s the first in a new wave of lighting products to follow the Philips Hue and the LIFX. Those bulbs are smarter than the usual drugstore variety—they can sync with a smartphone app, and even keep rhythm with a song—but they aren’t intelligent by the same standards as the Nest Thermostat or even a tool like Google Maps. In short: They’re connected, but not responsive.


More Than a Gimmick


Embedded in Alba’s light diodes are sensors for motion, occupancy, and ambient light. This meant cofounder Jovi Gacusan, who worked on sensors at NASA, had to create a new core technology, because in order to work efficiently Alba has to both read and react to available light. “If you think about noise-canceling headphones, we have to cancel out the light being emitted by the light itself to understand how much light is in the light source,” Joseph says. The result is a light bulb that can see-saw with natural light and in doing so uses 60 to 80 percent less energy than a regular LED bulb.
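To make the noise-canceling analogy concrete, here is a purely hypothetical sketch of that control idea, with made-up constants and function names; it is not Stack’s firmware, just an illustration of subtracting a bulb’s own output from its sensor reading before deciding how far to dim.

    # Hypothetical sketch of self-emission cancellation -- not Stack's firmware.
    # The controller subtracts an estimate of the bulb's own contribution from
    # the sensor reading so it can "see" only the natural light, then dims
    # itself to top the room up to a target brightness.
    TARGET_LUX = 400.0      # desired total light level (assumed value)
    LUX_PER_DUTY = 600.0    # bulb output at 100% duty, as seen by its sensor (assumed)

    def estimate_daylight(sensor_lux: float, duty: float) -> float:
        """Cancel out the bulb's own emission, like noise-canceling headphones."""
        return max(sensor_lux - duty * LUX_PER_DUTY, 0.0)

    def next_duty(sensor_lux: float, duty: float) -> float:
        """Pick a dimming level that tops up the estimated daylight to the target."""
        daylight = estimate_daylight(sensor_lux, duty)
        needed = max(TARGET_LUX - daylight, 0.0)
        return min(needed / LUX_PER_DUTY, 1.0)

    # Example: as daylight grows through the day, the bulb dims itself down.
    duty = 0.8
    for sunlight in (50.0, 300.0, 450.0):   # morning, midday, afternoon
        duty = next_duty(sunlight + duty * LUX_PER_DUTY, duty)
        print(f"daylight ~{sunlight:>5.0f} lux -> duty {duty:.2f}")

The point of the toy loop is the subtraction step: without canceling its own emission, the bulb would mistake its own light for daylight and never settle on a stable dimming level.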





Alba’s other main function involves tailoring itself around its owners. Like the Nest, Alba runs algorithms and remembers user habits at home. “If we notice that people are in a certain part of the house, at certain times of day, and then they mosey on over to a bedroom, and then they spend more time awake in the bedroom before they go to bed we can start to light a pathway,” Joseph says. Alba emits light with blue tones in the morning, to help users become alert, and then glows warmer shades of white as the day wears on. Users can adjust all this in the Stack app, and create profiles (‘dinner party,’ ‘nap time,’ and so on), and Alba will fold that data into its learning curve too.


It’s fairly easy to see how Stack, by riding the light bulb’s coattails, could quickly become the spine of the connected home. Everyone has to buy light bulbs, and many of them. And because each Alba bulb contains a Bluetooth module that acts like iBeacon technology, the hardware is already poised to start talking to other smart gadgets.


For now, it’s being used to track users and their movement patterns around the house. (This feature has a dystopian ring to it. Joseph says, “We don’t track any personal data, or anything that’s on an individual user. It’s environmental and used for learning and product performance.”) Down the line, though, Joseph has ambitions of partnering with other companies, from “thermostats to smart beds that track how people sleep,” to eventually build a home that goes beyond convenience and is actually healthier. “The more data we have, we can see if you’re in a REM cycle, and then know not to wake you.”


Alba is sold as a starter kit with two bulbs, for $150.



Why Getting It Wrong Is the Future of Design


Degas was engaged in a strategy that has shown up periodically for centuries across every artistic and creative field. Think of it as one step in a cycle: In the early stages, practitioners dedicate themselves to inventing and improving the rules—how to craft the most pleasing chord progression, the perfectly proportioned building, the most precisely rendered amalgamation of rhyme and meter. Over time, those rules become laws, and artists and designers dedicate themselves to excelling within these agreed-upon parameters, creating work of unparalleled refinement and sophistication—the Pantheon, the Sistine Chapel, the Goldberg Variations. But once a certain maturity has been reached, someone comes along who decides to take a different route. Instead of trying to create an ever more polished and perfect artifact, this rebel actively seeks out imperfection—sticking a pole in the middle of his painting, intentionally adding grungy feedback to a guitar solo, deliberately photographing unpleasant subjects. Eventually some of these creative breakthroughs end up becoming the foundation of a new set of aesthetic rules, and the cycle begins again.


Degas wasn't just thinking outside the box. He was purposely creating something that wasn't pleasing.


For the past 30 years, the field of technology design has been working its way through the first two stages of this cycle, an industry-wide march toward more seamless experiences, more delightful products, more leverage over the world around us. Look at our computers: beige and boxy desktop machines gave way to bright and colorful iMacs, which gave way to sleek and sexy laptops, which gave way to addictively touchable smartphones. It's hard not to look back at this timeline and see it as a great story of human progress, a joint effort to experiment and learn and figure out the path toward a more refined and universally pleasing design.


All of this has resulted in a world where beautifully constructed tech is more powerful and more accessible than ever before. It is also more consistent. That's why all smartphones now look basically the same—gleaming black glass with handsomely cambered edges. Google, Apple, and Microsoft all use clean, sans-serif typefaces in their respective software. After years of experimentation, we have figured out what people like and settled on some rules.


But there's a downside to all this consensus—it can get boring. From smartphones to operating systems to web page design, it can start to feel like the truly transformational moments have come and gone, replaced by incremental updates that make our devices and interactions faster and better.


This brings us to an important and exciting moment in the design of our technologies. We have figured out the rules of creating sleek sophistication. We know, more or less, how to get it right. Now, we need a shift in perspective that allows us to move forward. We need a pole right through a horse's head. We need to enter the third stage of this cycle. It's time to stop figuring out how to do things the right way, and start getting it wrong.



In late 2006, when I was creative director here at WIRED, I was working on the design of a cover featuring John Hodgman. We were far along in the process—Hodgman was styled and photographed, the cover lines written, our fonts selected, the layout firmed up. I had been aiming for a timeless design with a handsome monochromatic color palette, a cover that evoked a 1960s jet-set vibe. When I presented my finished design, WIRED's editor at the time, Chris Anderson, complained that the cover was too drab. He uttered the prescriptive phrase all graphic designers hate hearing: “Can't you just add more colors?”


I demurred. I felt the cover was absolutely perfect. But Chris did not, and so, in a spasm of designerly “fuck you,” I drew a small rectangle into my design, a little stripe coming off from the left side of the page, rudely breaking my pristine geometries. As if that weren't enough, I filled it with the ugliest hue I could find: neon orange— Pantone 811, to be precise. My perfect cover was now ruined!


By the time I came to my senses a couple of weeks later, it was too late. The cover had already been sent to the printer. My anger morphed into regret. To the untrained eye, that little box might not seem so offensive, but I felt that I had betrayed one of the most crucial lessons I learned in design school—that every graphic element should serve a recognizable function. This stray dash of color was careless at best, a postmodernist deviation with no real purpose or value. It confused my colleagues and detracted from the cover's clarity, unnecessarily making the reader more conscious of the design.



But you know what? I actually came to like that crass little neon orange bar. I ended up including a version of it on the next month's cover, and again the month after that. It added something, even though I couldn't explain what it was. I began referring to this idea—intentionally making “bad” design choices—as Wrong Theory, and I started applying it in little ways to all of WIRED's pages. Pictures that were supposed to run large, I made small. Where type was supposed to run around graphics, I overlapped the two. Headlines are supposed to come at the beginning of stories? I put them at the end. I would even force our designers to ruin each other's “perfect” layouts.


At the time, this represented a major creative breakthrough for me—the idea that intentional wrongness could yield strangely pleasing results. Of course I was familiar with the idea of rule-breaking innovation—that each generation reacts against the one that came before it, starting revolutions, turning its back on tired conventions. But this was different. I wasn't just throwing out the rulebook and starting from scratch. I was following the rules, then selectively breaking one or two for maximum impact.



Once I realized what I'd stumbled on, I started to see it everywhere, a strategy used by trained artists who make the decision to do something deliberately wrong. Whether it's a small detail, like David Fincher swapping a letter for a number in the title of the movie Se7en, or a seismic shift, like Miles Davis intentionally seeking out the “wrong notes” and then trying to work his way back, none of these artists simply ignored the rules or refused to take the time to learn them in the first place. No, you need to know the rules, really master their nuance and application, before you can break them. That's why Hunter Thompson could be a great gonzo journalist while so many of his followers and imitators—who never mastered the art of traditional reporting and writing that underlay Thompson's radical style—suffer in comparison.



Why does the Wrong Theory work? After all, symmetry is naturally pleasing. Put two faces in front of a 1-year-old and she will immediately pick the more symmetrical one. But what if we're after something deeper than simple pleasure? It turns out that, while we might initially prefer the symmetrical and seamless, we are more challenged and invested in the imperfect. Think of Cindy Crawford's mole or Joaquin Phoenix's scar. Both people are stunning, but they stand out for their so-called imperfections. A better thought experiment might be to put that child in a room with 99 symmetrical faces and one asymmetrical one. Which one do you think she'll be drawn to?


A 2001 study conducted by Baylor College of Medicine and Emory University might begin to answer that question. In it, neuroscientists conducted fMRI scans on 25 adults who received squirts of fruit juice or water into their mouths in either predictable or unpredictable patterns. The scans showed that the subjects who got the unpredictable sequence registered noticeably more activity in the nucleus accumbens—an area of the brain that processes pleasure.


Yes, our minds learn to prefer activities that we repeatedly enjoy, because we recognize those patterns and come to expect a payoff. But the study suggests that when our predictions are wrong—when we walk into a surprise party instead of a planned dinner, for instance—that's when our pleasure centers really light up. We may find comfort in what we know we like, but it's the aberrations that bring us to attention.


How might these findings be applied to technology design? It's still a bit early to say. Right now we are late in the second stage of the design cycle—applying agreed-upon rules to an ever-widening array of products, apps, sites, and services. Put another way, designers are still trying to get things right, not deliberately make them wrong. But even as they do so, they are learning how to push up against once-sacrosanct conventions. As a result, they're giving us glimpses of what “wrong” technology might look like.



Take Instagram. When Kevin Systrom and Mike Krieger were first developing the photo-sharing social network, they wrote a sentence on a whiteboard that summed up the accepted wisdom around photo sharing: “Today online, people post photos that they take with cameras, and they store them in albums to share with only their friends.” Then, systematically, they began replacing words. “Cameras” became “phones,” “in albums” became “as single photos,” “only their friends” became “everyone.” In the process, they stumbled upon an innovative insight about how people's behavior would change. This isn't really an example of Wrong Theory—the result was incredibly appealing, not intentionally off-putting. But the method they used to create it, understanding and then subverting explicit established rules, suggests the kind of thinking that can move us into this new era.


Indeed, we're starting to see that kind of thinking everywhere. Snapchat built a multibillion-dollar empire on a notion that seems deeply wrong at first blush—actively preventing users from archiving and accessing their communication. And Netflix undercut the entire structure of television by deciding to release every episode of its original series at once. That meant trading off some of the pleasure of the weekly cliffhanger and the day-after watercooler chatter for more complicated plotlines—like the maybe-too-byzantine Arrested Development reboot—and the joys of binge-watching. Or take a look at the growing subgenre of intentionally frustrating videogames—like Flappy Bird or Super Hexagon—that ignore standard on-ramping and throw players directly into chaos.


All of these examples point the way toward the next challenge for technology design. What happens after you've learned how to make technology that is supremely appealing and functional? A whole new range of opportunities opens up. By breaking those rules, we can create technology that is more than merely useful or beautiful or natural. We can imagine technology that is complicated and personal—nostalgic, funny, self-deprecating, abrasive. Yes, there will be missteps. For every Kind of Blue there were about a million Metal Machine Musics—unlistenable exercises in self-indulgence. But only by courting failure can we find new ways forward. It's time for us to create the next wave of technology. It's time for us to be wrong.



Throughout history, artists and innovators have advanced their fields by making deliberately “wrong” choices. Here are some great moments in Wrong Theory.

—CORY PERKINS



1903 | Paris' fashion elite recognized Paul Poiret at a young age for his skilled drawings, but where other designers focused on cages and corsets, his work featured draped fabric and natural silhouettes.



1913 | Igor Stravinsky's The Rite of Spring was a departure from traditional composition: The rising star abandoned harmonic consonance in favor of harsh, tense tones that incited a riot at its first performance.



Early/Mid-20th Century | In developing the Epic Theater style, dramatists like Bertolt Brecht consciously reminded audiences of the play's artifice, encouraging actors to break the fourth wall and temper the authenticity of their performance.



1964 | Sick of the utilitarianism dominant at the time, Robert Venturi designed his Vanna Venturi House to include blatantly unnecessary features—like the facade's nonsupporting arch and an interior stairway leading to nowhere—that are now hallmarks of postmodernism.



1989 | In the 1980s, Will Wright created SimCity, a cutting-edge videogame. Instead of building a closed ecosystem—like most developers before him—he handed the tools over to players to map their own gamescape.



1997 | Industrial designer Hella Jongerius molded perfectly proportioned tableware, then fired it at exceedingly high temperatures, slightly deforming each piece.



2007 | The Sopranos' artfully crafted final scene built tension expertly—then shocked audiences by abruptly cutting to black just before the expected climax.


All Photos: Getty Images


