Olympus’ New Camera Has a Sensor That Shifts to Take 40-Megapixel Photos


Olympus’s new camera has a 16-megapixel sensor that moves—when you click the shutter, it shifts laterally and vertically to create a 40-megapixel composite image.


It’s rare to encounter a camera with a feature we’ve never seen before. Autofocus speeds are absurdly fast, stabilization systems are rock-solid, and Wi-Fi features are commonplace. And Olympus’s latest mirrorless model, the OM-D E-M5 Mark II, has all those attributes, as well as a weather-sealed body. But its high-resolution shooting mode that moves its 16-megapixel sensor around inside the camera to capture a 40-megapixel shot—that’s truly innovative.


According to Olympus, when you shoot with the OM-D E-M5 Mark II in “Hi-Res mode,” its Micro Four Thirds sensor moves both side to side and up and down during an 8-shot sequence. The processing engine then stitches the eight shots together to build a 40-megapixel composite image in the form of a 64MB RAW file. It’s a pretty ingenious way to create super high-resolution images without affecting the pixel density of the sensor itself.
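To make the compositing step concrete, here is a minimal sketch of how sub-pixel-shifted exposures can be merged onto a finer grid. It is illustrative only: the synthetic frames, the half-pixel offsets, and the two-times scale factor are assumptions, and Olympus’s real pipeline works on raw Bayer data with its own alignment and demosaicing.

```python
# Minimal pixel-shift compositing sketch (illustrative, not Olympus's algorithm).
# `frames` holds eight low-resolution exposures; `offsets` gives each frame's
# half-pixel shift in (row, col) steps on a 2x-finer output grid.
import numpy as np

def pixel_shift_composite(frames, offsets, scale=2):
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Drop each low-res sample onto its shifted position on the fine grid.
        acc[dy::scale, dx::scale] += frame
        hits[dy::scale, dx::scale] += 1
    hits[hits == 0] = 1  # avoid division by zero where no sample landed
    return acc / hits

# Hypothetical usage with synthetic data standing in for real exposures.
rng = np.random.default_rng(0)
frames = [rng.random((120, 160)) for _ in range(8)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)] * 2  # half-pixel pattern, visited twice
composite = pixel_shift_composite(frames, offsets)
print(composite.shape)  # (240, 320): four times the pixel count of one frame
```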


While the E-M5 Mark II is Olympus’s new midrange interchangeable-lens offering, it matches some features of the flagship OM-D E-M1 and even trumps others. The camera has a new five-axis stabilization system designed for both stills and video, and Olympus says it’s audibly quieter than the company’s previous five-axis systems.


The new camera also offers the same resistance to splashes, dust, and freezing temperatures as the E-M1 and similar continuous-shooting speeds: 10fps without autofocus enabled and 5 with AF turned on. Olympus is touting the E-M5 Mark II’s 81-point contrast-detection autofocus system as the “world’s fastest”; according to the company’s in-house tests with their own 18-40mm lens, it locks onto a subject in just 0.044 of a second. That’s really fast, but take it with a spoonful of salt. Whenever a new camera debuts, every manufacturer is quick to claim it’s faster than the others.


Many of the camera’s improvements come in video mode, which captures 1080p footage at a wider selection of frame rates and bitrates—52Mbps at 1080p/60fps and 77Mbps at 1080p/30fps. There’s also a 24fps mode for 1080p video, and the camera has touch-focus abilities on its flip-and-swivel 3-inch touchscreen. It can crank out raw video to an external monitor via HDMI.
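For a rough sense of what the top bitrate means for storage (a simple unit conversion, not a figure Olympus quotes):

```latex
77~\text{Mbit/s} \;\approx\; \frac{77}{8}~\text{MB/s} \;\approx\; 9.6~\text{MB/s} \;\approx\; 580~\text{MB per minute of footage}
```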


Still photographers should be happy with the camera’s core specs and control scheme. The mechanical shutter tops out at 1/8000 of a second, and ISO ramps up to 25,600. When you’re looking at the screen or through the viewfinder, focus-peaking overlays are available in a range of colors. There are six customizable buttons on the camera body, making it easy to switch between shooting modes with a press (or two) of a button. To go along with the camera’s built-in Wi-Fi, there’s an updated mobile app that supports remote live view while shooting video.


In terms of price, size, and specs, this looks like a camera built squarely to compete with the Fujifilm X-T1. It’s a bit smaller and slightly cheaper: At $1,100 for the body only, it’s a C-note less than Fuji’s superb weather-sealed shooter—and with significantly better video chops.



E-cigarette exposure impairs immune responses in mouse model

In a study with mice, Johns Hopkins Bloomberg School of Public Health researchers have found that e-cigarettes compromise the immune system in the lungs and generate some of the same potentially dangerous chemicals found in traditional nicotine cigarettes.



E-cigarettes are an emerging public health concern, as they gain popularity among current and former smokers as well as those who have never smoked, including teenagers. The perception that e-cigarettes pose little health risk is so entrenched that some smokers, including those with chronic obstructive pulmonary disease (COPD), are switching from cigarettes to e-cigarettes. (Many COPD patients continue to smoke after their diagnosis.) Both cigarettes and e-cigarettes are sources of nicotine. E-cigarettes contain less nicotine than cigarettes, but actual nicotine intake by e-cigarette users can approximate that of cigarette smokers.


The findings will be published on Feb. 4 in the journal PLOS ONE.


"Our findings suggest that e-cigarettes are not neutral in terms of the effects on the lungs," notes senior author Shyam Biswal, PhD, a professor in the Department of Environmental Health Sciences at the Bloomberg School. "We have observed that they increase the susceptibility to respiratory infections in the mouse models. This warrants further study in susceptible individuals, such as COPD patients who have switched from cigarettes to e-cigarettes or to new users of e-cigarettes who may have never used cigarettes."


For their study, the researchers divided the mice into two groups: one was exposed to e-cigarette vapor in an inhalation chamber in amounts that approximated actual human e-cigarette inhalation for two weeks, while the other group was exposed only to air. The researchers then divided each group into three subgroups. One received nasal drops containing Streptococcus pneumoniae, a bacterium responsible for pneumonia and sinusitis, among other illnesses, in humans. A second received nasal drops of the influenza A virus, and the third subgroup received neither virus nor bacteria.


The mice exposed to e-cigarette vapor were significantly more likely to develop compromised immune responses to both the virus and the bacteria, which in some cases killed the mice, the researchers found.


"E-cigarette vapor alone produced mild effects on the lungs, including inflammation and protein damage," says Thomas Sussan, PhD, lead author and an assistant scientist in the Department of Environmental Health Sciences at the Bloomberg School. "However, when this exposure was followed by a bacterial or viral infection, the harmful effects of e-cigarette exposure became even more pronounced. The e-cigarette exposure inhibited the ability of mice to clear the bacteria from their lungs, and the viral infection led to increased weight loss and death indicative of an impaired immune response."


The researchers believe this study, thought to be the first to examine animal response to e-cigarette inhalation, will serve as a model for future studies on the effects of e-cigarettes.


Since their introduction to the U.S. market in 2007, e-cigarettes have prompted debate as to their risk in general and relative to cigarettes. E-cigarettes, which at their simplest consist of a battery, an atomizer and a cartridge, produce a vapor that is inhaled and then exhaled by the user. Previous analyses of e-cigarette vapor have identified chemicals that could be toxic or carcinogenic, including particulates, formaldehyde and volatile organic compounds, but at lower levels than cigarette smoke. Another thing working in the favor of e-cigarettes in the risk continuum is that they don't combust the way cigarettes do, limiting some of the chemicals released in cigarette smoke.


As part of their study, the researchers also determined that e-cigarette vapor contains "free radicals," known toxins found in cigarette smoke and air pollution. Free radicals are highly reactive agents that can damage DNA or other molecules within cells, resulting in cell death. Cigarette smoke contains 10^14 free radicals per puff. Though e-cigarette vapor contains far fewer free radicals than cigarette smoke -- one percent as much -- their presence in e-cigarettes still suggests potential health risks that merit further study, the researchers say.
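Putting the two figures in that paragraph together (a back-of-the-envelope reading, not a separately reported result):

```latex
0.01 \times 10^{14}~\text{radicals per puff} \;=\; 10^{12}~\text{free radicals per puff of e-cigarette vapor}
```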


"We were surprised by how high that number was, considering that e-cigarettes do not produce combustion products," Sussan says. "Granted, it's 100 times lower than cigarette smoke, but it's still a high number of free radicals that can potentially damage cells."


The U.S. Food and Drug Administration last spring announced that it was going to begin regulating e-cigarettes. E-cigarette sales are projected to overtake cigarette sales in the next decade. Teen use of e-cigarettes outpaces cigarette use, according to a recent survey released by the National Institute on Drug Abuse. And, according to the U.S. Centers for Disease Control and Prevention, more than one-quarter million teenagers who reported never having smoked a cigarette reported using e-cigarettes in 2013.


The work was supported by grants from the Flight Attendant Medical Research Institute (FAMRI) and the National Institutes of Health's National Cancer Institute (R01CA140492 and P50CA058184).



New enzyme reduces sulfite in wine even faster

Sulfites are sulfur compounds that occur naturally. They are poisonous to many life forms even at small concentrations. Sulfites and sulfur dioxide are also added to wine and dried fruit as preservatives that inhibit the growth of unwanted microorganisms, increasing the shelf-life of these products. The biochemists Prof. Dr. Oliver Einsle and Dr. Bianca Hermann from the University of Freiburg teamed up with researchers from the Technische Universität Darmstadt to characterise a bacterial enzyme that reduces sulfite up to one hundred times faster than any other known enzyme. The researchers determined the high-resolution crystal structure of the enzyme complex MccA and worked out the molecular details of its reaction mechanism.



Their findings explain why MccA is able to convert sulfite so quickly, meaning that this enzyme could be used more frequently in biotechnology in the future. Customised microorganisms capable of high-speed sulfite reduction could be employed for desulfurization under mild conditions, for example. The results of the scientists' research have now been published in Nature.


Some bacteria use sulfite for their metabolism -- for example, for respiration when there is no oxygen. Bacteria harvest energy by reducing sulfite to sulfide, during which they split off all oxygen atoms as water. One of the bacteria that use sulfur for respiration is Wolinella succinogenes. This organism can be found in the rumen of ruminants, where it produces MccA. MccA is a metalloprotein that contains 24 haem groups with iron ions. It also consists of three subunits, making it a trimer.
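For orientation, the overall conversion described here corresponds to a textbook six-electron reduction, written below as a half-reaction. This is the net chemistry only; the MccA mechanism discussed further down proceeds via sulfur dioxide and differs in its intermediates.

```latex
\mathrm{SO_3^{2-}} + 6\,\mathrm{H^+} + 6\,e^- \;\longrightarrow\; \mathrm{S^{2-}} + 3\,\mathrm{H_2O}
```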


The scientists were able to decode the enzyme's structure, revealing a previously unknown active centre for sulfite reduction. This centre consists of one of the haem groups combined with a copper ion, which is bound to two residues of cysteine, a sulfur-containing amino acid. Through its position, the copper ion prevents sulfite ions from binding to the enzyme. Instead, sulfur dioxide binds to the active centre because it formally contains one less water molecule and thus requires less space, so to speak. This initial step is what distinguishes the reduction mechanism of MccA from other enzymes that break down sulfite. MccA then reduces sulfur dioxide to sulfide through further dehydration. The scientists were able to detect sulfur dioxide and sulfur monoxide in the enzyme's structural model, from which they inferred this new mechanism.


Einsle is head of the Membrane Proteins and Metalloproteins Lab at the Institute of Biochemistry at the University of Freiburg and an associated member of the cluster of excellence BIOSS Centre for Biological Signalling Studies, also at the University of Freiburg. Hermann is a postdoctoral researcher at the Institute of Biochemistry and a member of Einsle's lab team.




Story Source:


The above story is based on materials provided by Albert-Ludwigs-Universität Freiburg. Note: Materials may be edited for content and length.



First sensor for 'crowd control' in cells

University of Groningen scientists have developed a molecular sensor to measure 'crowding' in cells, which reflects the concentration of macromolecules present. The sensor provides quantitative information on the concentration of macromolecules in bacteria and in mammalian cells. A description of the sensor and its validation was published in Nature Methods on 2 February.



Living cells are full of macromolecules like proteins and nucleic acids. This has a profound influence on the way molecules inside a cell interact. Crowding reduces diffusion, but it also means that molecules stay together more easily. For example, DNA transcription is a hundred times faster under realistic cellular conditions than under diluted laboratory conditions. Despite its importance, crowding is rarely taken into account in biochemical studies.


The main reason for this is the lack of a reliable measurement technique for crowding. So far, it was only possible to estimate crowding from the average concentration of macromolecules and average cellular volumes. The new sensor is able to measure crowding in living cells, with a resolution that allows for the visualization of intracellular differences in both time and space.


The new sensor was developed by Dr Arnold Boersma and Professor of Biochemistry Bert Poolman. They designed a protein 'spring' with fluorescent protein markers on both ends. The first marker emits blue light when excited by laser light. This blue light in turn excites the second marker, which then emits yellow light. The efficiency of this resonance energy transfer depends strongly on the distance between the two markers; the technique is called 'Förster resonance energy transfer' (FRET).
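In the standard Förster treatment, the transfer efficiency E falls off steeply with the donor-acceptor distance r (this is the general textbook relation, not a calibration reported for this particular sensor):

```latex
E = \frac{1}{1 + \left(r / R_0\right)^{6}}
```

Here R_0 is the Förster radius, the distance at which half the excitation energy is transferred; because of the sixth power, small changes in distance produce large changes in the measured signal.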


Macromolecules exert a mechanical pressure on the protein spring, forcing the markers closer together. A series of control experiments have ruled out that other forces (e.g. ionic strength or chemical affinity) affect the distance between both markers. Other experiments have shown that the sensor gives an accurate quantitative estimate of crowding in cells.


Artificial gene


The sensor is encoded on an artificial gene, which is expressed in the cells. Boersma and Poolman developed two versions of the gene: one for bacteria and one for mammalian cells. 'We will use this sensor to map the structure of the cytoplasm during the cell cycle', says Poolman. 'Our interest is in volume regulation, which obviously affects crowding. But it's easy to envision many other applications for the sensor.'


Information on crowding is important for Systems Biology and Synthetic Biology because the conditions inside cells influence interaction rates and affinities of biomolecules, folding rates and fold stabilities in vivo. 'We want to know how cells work, and how we can engineer designer cells', says Poolman, who is also scientific director of the Centre for Synthetic Biology at the University of Groningen. He expects the crowding sensor to contribute to a better understanding of cellular function.


The sensor was designed at the Groningen Biomolecular Sciences and Biotechnology Institute and the Zernike Institute for Advanced Materials, University of Groningen, Groningen, the Netherlands.




Story Source:


The above story is based on materials provided by the University of Groningen. Note: Materials may be edited for content and length.



Net Neutrality Won Big Today. But We Can’t Get Complacent Just Yet


Net neutrality advocates demonstrate across the street from the Comcast Center in Philadelphia, on Sept. 15, 2014. Matt Rourke/AP



The Drudge Report headline said it all: “NEW RULES: FCC Chair Unveils ‘Net Neutrality.'” That’s a pretty apt description of how the world sees FCC Chairman Tom Wheeler’s announcement today that he wants to change the way the nation’s telecommunications regulator views the internet.


The problem is that it’s not entirely accurate. To be sure, the FCC made a remarkable announcement—one that many pundits viewed as impossible a year ago—but it’s also easy to get caught up in the rhetoric and miss what’s really going on here.


You see, the FCC didn’t unveil net neutrality today. It has backed the idea of net neutrality for about a decade. That’s why your internet service provider doesn’t already charge you extra for running Skype or a virtual private network, or even a router.


What’s changing here is the way the FCC is classifying broadband internet. And the reason this is happening is because the courts have told the FCC that it simply can’t enforce net neutrality unless it does this. A year ago, Chairman Wheeler thought he could keep the courts happy with some regulatory jujutsu, but net neutrality lobbyists, the President, and millions of people told him otherwise.


So now Wheeler wants to turn back the clock and classify broadband the same way that DSL was classified back in the 1990s—as a regulated transmission service. When people talk about Title II, they’re really talking about a return to the way that internet service providers were originally regulated, under the Title II section of the 1934 Communications Act.


It’s true that Title II lets the FCC set rules for your internet service provider with a much firmer hand. That’s why the AT&Ts, Comcasts and Verizons of the world hate it. But the real question here is whether the FCC is going to actually use any of those powers to change much of anything. Judging from the FCC’s comments today, things could really change for wireless broadband users, but the agency isn’t really proposing to use Title II to do anything new in the wired broadband world.


So when cable industry lobbying groups such as Broadband For America argue that Wheeler’s proposal “could have spillover effects into the broader Internet ecosystem and threaten Silicon Valley companies that rely heavily on the Internet,” take that with a grain of salt.


The markets certainly weren’t alarmed. Stocks for the big telcos went up today as market-watchers were relieved that the FCC said it wasn’t going to regulate what the cable and phone companies charge us for internet access.


Chairman Wheeler says that he wants to extend net neutrality to mobile broadband, an area that’s been given a net neutrality pass to date. That’s a big change, for sure, but one that’s largely in line with the nearly 4 million pro-net-neutrality comments that the FCC has already received on this issue.


But for wired broadband carriers, it looks pretty much like we’ll return to business as usual. As Chairman Wheeler wrote in a WIRED opinion piece today, he wants to “ban paid prioritization, and the blocking and throttling of lawful content and services.” Nothing new there. He also promises the providers “no rate regulation, no tariffs, no last-mile unbundling.”


So as you read the avalanche of commentary on today’s position, keep this in mind: Title II is important because it finally gives the FCC the legal firepower to enforce net neutrality. “Title II, by itself, doesn’t accomplish much more than that,” says Corynne McSherry, intellectual property director with the Electronic Frontier Foundation. What’s really going to matter is whether the FCC uses that power.


“What matters are the rules themselves. That’s why it is crucial that the public stays engaged, so we can make sure the FCC does the right thing in the next few weeks, and then hold it accountable over time.”


“The telcos can play a long game,” she adds. “So we have to do the same.”



Silk Road Mastermind Ross Ulbricht Convicted of All 7 Charges



Courtesy Ulbricht Family.



A jury has spoken, and the mask is off: Ross Ulbricht has been convicted of being the Dread Pirate Roberts, secret mastermind of the Silk Road online narcotics empire.


On Wednesday, less than a month after his trial began in a downtown Manhattan courtroom, 30-year-old Ulbricht was convicted of all seven crimes he was charged with. But Ulbricht will almost certainly appeal the decision, given his legal team’s calls for a mistrial and frequent protests against the judge’s decisions throughout the case.


From his first pre-trial hearings in New York, the government’s evidence that Ulbricht ran the Silk Road’s billion-dollar marketplace under the pseudonym the Dread Pirate Roberts was practically overwhelming. When the FBI arrested Ulbricht in the science fiction section of a San Francisco public library in October of 2013, his fingers were literally on the keyboard of his laptop, logged into the Silk Road’s “mastermind” account. On his seized laptop’s hard drive, investigators quickly found a journal, daily logbook, and thousands of pages of private chat logs that chronicled his years of planning, creating and day-to-day running of the Silk Road. That red-handed evidence was bolstered by a college friend of Ulbricht’s who testified at trial that the young Texan had confessed to him that he created the Silk Road. On top of that, notes found crumpled in his bedroom’s trashcan connected him to the Silk Road’s code. Ulbricht’s guilty verdict was locked down even further by a former FBI agent’s analysis that traced $13.4 million worth of the black market’s bitcoins from the Silk Road’s servers in Iceland and Pennsylvania to the bitcoin wallet on Ulbricht’s laptop.


Ulbricht’s defense team quickly admitted at trial that Ulbricht had created the Silk Road. But his attorneys argued that it had been merely an “economic experiment,” one that he quickly gave up to other individuals who grew the site into the massive drug empire the Silk Road represented at its peak in late 2013. Those purported operators of the site, including the “real” Dread Pirate Roberts, they argued, had framed Ulbricht as the “perfect fall guy.”


“The real Dread Pirate Roberts is out there,” Ulbricht’s lead attorney Joshua Dratel told the jury in opening statements.


But that dramatic alternative theory never produced a credible explanation of the damning evidence found on Ulbricht’s personal computer. The defense was left to argue that Ulbricht’s laptop had been hacked, and voluminous incriminating files injected into the computer—perhaps via a Bittorrent connection he was using to download an episode of the Colbert Report at the time of his arrest. In their closing arguments, prosecutors called that story a “wild conspiracy theory” and a “desperate attempt to create a smokescreen.” It seems the jury agreed.


Despite the case’s grim outcome for Ulbricht, his defense team seemed throughout the trial to be laying the groundwork for an appeal. His lead attorney Joshua Dratel called for a mistrial no fewer than five times, and was rejected by the judge each time. Dratel’s protests began with pre-trial motions to preclude a large portion of the prosecution’s evidence based on what he described as an illegal, warrantless hack of the Silk Road’s Icelandic server by FBI investigators seeking to locate the computer despite its use of the Tor anonymity software. As the trial began, Dratel butted heads with the prosecution and judge again on the issue of cross-examining a Department of Homeland Security witness about the agency’s alternative suspects in the case, including bitcoin mogul and Mt. Gox CEO Mark Karpeles. And in the last days of the trial, Dratel strongly objected again to a decision by the judge to disallow two of the defense’s expert witnesses based on a lack of qualifications.


Even so, the case’s outcome will no doubt be seen by many as U.S. law enforcement striking a significant blow against the dark web’s burgeoning drug trade. More broadly, the case demonstrates the limits of cryptographic anonymity tools like Tor and bitcoin against the surveillance powers of the U.S. government. In spite of his use of those crypto tools and others, Ulbricht couldn’t prevent the combined efforts of the FBI, DHS, and IRS from linking his pseudonym to his real-world identity.


But Ulbricht will nonetheless be remembered not just for his conviction, but also for ushering in a new age of online black markets. Today’s leading dark web drug sites like Agora and Evolution offer more narcotics listings than the Silk Road ever did, and have outlived law enforcement’s crackdown on their competitors. Tracking down and prosecuting those new sites’ operators, like prosecuting Ulbricht, will likely require the same intense, multi-year investigations by three-letter agencies.


If the feds do find the administrators of the next generation of dark web drug sites, as they found Ulbricht, don’t expect those online drug lords to let their unencrypted laptops be snatched in a public library, or to have kept assiduous journals of their criminal conspiracies. The Dread Pirate Roberts’ successors have no doubt been watching his trial unfold and learning from his mistakes. And the next guilty verdict may not be so easy.


This story will be updated as we learn more about the fallout of the verdict in Ulbricht’s case.



Like All Great Innovators, Amazon’s Bezos Unfazed By Recent Failures


Amazon CEO Jeff Bezos. Jim Merithrew/WIRED



In many ways, 2014 turned out to be the year of the smartphone. Headlined by the release of Samsung’s water-resistant, feature-filled Galaxy S5 and Apple’s new “phablet,” the iPhone 6 Plus, the past year produced a slate of big winners in the mobile industry. It also produced several huge flops.


Amazon’s Fire Phone, for example, was one of the big losers of 2014. After launching in July to paltry sales numbers, the Fire Phone quickly dropped its $200 price tag (subsidized by mobile carrier AT&T) in favor of a September price of only $0.99. Unfortunately, poor sales numbers continued from there. In October, Amazon announced $437 million in Q3 losses and an $83 million inventory of unsold Fire Phones. A $170 million expense write-off came next. Finally, Amazon launched a last-ditch sale just before December 2014, practically giving away unlocked Fire Phones for $199, a full $250 lower than their original price tag. In short, the phone was a huge flop.


But Amazon CEO Jeff Bezos – who personally oversaw the entire Fire Phone project – is not discouraged. Instead, he remains optimistic (to a fault, according to some), pushing forward in the way that entrepreneurs like Steve Jobs, Richard Branson, and today’s “it” entrepreneur, Elon Musk, have.


Call it a lesson in perseverance, but I recently took a look at what happens to successful entrepreneurs when they hit a bump in the road. Here are a few things that stuck out.


Jeff Bezos



“I’ve made billions of dollars of failures at Amazon.com. Literally.”



Bezos, the founder and long-time CEO of Amazon.com, has overseen huge successes in the past two decades including the Amazon Marketplace, Amazon Prime, Kindle, and current leading cloud provider Amazon Web Services. Considering that Amazon is going up against the titans of several industries like Walmart, Netflix, Google, Microsoft, and Dell, Bezos’s success has been even more impressive. The road to those triumphs, however, has not been a smooth one.


Back in 2000, Bezos invested $60 million in Kozmo.com — a shampoo delivery service. That company didn’t last very long. Then, in 2012, Bezos lost $175 million through the purchase of LivingSocial, the main competitor to daily deal giant Groupon. Bezos isn’t shy about these failures, continuing to believe that innovation in spite of failure is the path to success, and that it only takes one big success to make up for dozens of failures.


Bezos is currently worth nearly $30 billion.


Steve Jobs



“I’m the only person I know that’s lost a quarter of a billion dollars in one year…It’s very character-building.”



The founder and creative force of Apple needs no introduction. Nearly everyone has some experience with one of Steve Jobs’s groundbreaking products, a list that includes the iPhone, iPod, iPad, and Apple’s line of computers. But like Bezos, Jobs had his own list of failures. For one, he was fired from the company he founded, by the man he had personally hired as CEO. After he was forced out of Apple, he pushed forward with a new venture, NeXT Computer, which failed to reach the commercial success he saw at Apple.


Eventually Jobs returned to Apple, and the rest is history. But not every Apple product he created was a commercial triumph. His list of product failures includes the Apple Lisa, Apple III, ROKR, Macintosh TV, and MobileMe. In the end, though, Jobs is most remembered for his innovative products, devotion to design, and sometimes abrasive personality.


Jobs was worth $8.3 billion at the time of his death.


Richard Branson



“If you’re hurt, lick your wounds and get up again. If you’ve given it your absolute best, it’s time to move forward.”



The man behind the international Virgin brand, Richard Branson has launched upwards of 100 different companies in his lifetime. So it stands to reason that some of those ventures turned out to be less than successful. While Branson receives praise for successful commercial ventures like Virgin Airlines, a slew of far less successful Virgin companies have been largely forgotten.


In the early 1990s, Branson launched Virgin Cola in an attempt to infiltrate the soft drink market. After 10 years and with only a 0.5% market share, Virgin Cola finally closed shop. Virgin Cars was launched in 2000, but only sold 12,000 cars in three years.


Branson has also pushed the Virgin brand into makeup, wedding dresses, lingerie, and flowers. There was even a Virgin-powered social network, along with an attempt to rival iTunes with Virgin Pulse. All of these ventures met with little success. Branson has always looked at these failures as learning experiences, and continues to push forward, as evidenced by a recent investment in the race to create a worldwide ISP.


Branson’s current net worth is $4.9 billion.


Elon Musk



“Failure is an option here. If things are not failing, you are not innovating enough.”



Successes with both Tesla Motors and SpaceX have gained huge notoriety for entrepreneur Elon Musk, but the two companies both had brushes with bankruptcy. In 2008, an unsettled Musk was faced with a huge decision. He would have to split his limited funds in an effort to keep both companies going, or concentrate all of his money on one venture, letting the other fail completely. The decision was incredibly difficult for Musk, as both companies were nearly bankrupt at the time. The choice became so challenging, in fact, that it nearly caused Musk to suffer a nervous breakdown.


In the end, the Tesla and SpaceX CEO made the decision to fund both projects. Not long afterwards, both companies received huge rounds of funding, keeping them afloat and fueling the major success stories we’re all now familiar with. And at the rate he’s going, Elon Musk has the chance to be one of the most influential innovators of the 21st Century.


Musk is now worth $11.7 billion.


Tayven James is a Utah-based husband, father, and tech fan who loves to discover and opine about what’s new in the world.



Rapid and unexpected weight gain after fecal transplant

A woman successfully treated for a recurrent Clostridium difficile infection with stool from an overweight donor rapidly gained weight herself afterwards, becoming obese, according to a case report published in the new journal Open Forum Infectious Diseases.



Fecal microbiota transplant (FMT) is a promising treatment for relapsing C. difficile infections, a common cause of antibiotic-related diarrhea that in severe cases may be life-threatening. The case suggests that clinicians should avoid selecting stool donors who are overweight. The report also raises questions about the role of gut bacteria in metabolism and health.


At the time of the woman's fecal transplant in 2011, her weight was stable at 136 pounds, and her Body Mass Index (BMI) was 26. Then 32 years old, she had always been of normal weight. The transplant used donor stool from the woman's overweight but otherwise healthy teenage daughter, administered via colonoscopy, to restore a healthy balance of bacteria in the woman's gut, curing her C. difficile infection.


Sixteen months later, the woman weighed 170 pounds, and her BMI was 33, meeting medical criteria for obesity. The weight gain persisted despite a medically supervised liquid protein diet and exercise program. Continuing efforts to diet and exercise did not lower her weight: Three years after the transplant, she weighed 177 pounds with a BMI of 34.5, and she remains obese today.
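As a quick sanity check on those figures, the standard BMI arithmetic can be reproduced in a few lines. The patient's height is not given in the case report, so it is back-calculated here from the starting weight and BMI purely for illustration.

```python
# Standard BMI arithmetic (kg / m^2), used only to sanity-check the reported figures.
LB_PER_KG = 2.20462
M_PER_IN = 0.0254

def bmi(weight_lb, height_in):
    return (weight_lb / LB_PER_KG) / (height_in * M_PER_IN) ** 2

# Height is not stated in the report; infer it from the starting point (136 lb, BMI 26).
height_in = ((136 / LB_PER_KG) / 26) ** 0.5 / M_PER_IN  # roughly 60.6 inches

for pounds in (136, 170, 177):
    print(pounds, round(bmi(pounds, height_in), 1))
# Prints roughly 26.0, 32.5, and 33.8, in line with the reported 26, 33, and 34.5.
```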


"We're questioning whether there was something in the fecal transplant, whether some of those 'good' bacteria we transferred may have had an impact on her metabolism in a negative way," said Colleen R. Kelly, MD, of the Warren Alpert Medical School of Brown University, who wrote the case report with Neha Alang, MD, of Newport Hospital in Rhode Island. Such a link between bacteria in the gastrointestinal tract and weight is supported by previously published animal studies, where transfer of gut bacteria from obese to normal-weight mice can lead to a marked increase in fat. In light of the case and the animal data, the authors recommend selecting stool donors who are not overweight for fecal transplants.


Importantly, the FMT was not the only possible cause of the woman's weight gain. In addition to treatment for C. difficile, she had also been treated with several antibiotics for Helicobacter pylori infection. Other possible contributing factors in the woman's weight gain include the resolution of her C. difficile infection, genetic factors, aging, and stress related to illness. However, as noted above, she had never been overweight before.


The case raises many questions about donor selection and highlights the importance of studying long-term outcomes of FMT, according to Ana A. Weil, MD, and Elizabeth L. Hohmann, MD, both of Massachusetts General Hospital, who wrote a related editorial.


"Careful study of FMT will advance knowledge about safe manipulation of the gut microbiota," they wrote. "Ultimately, of course, it is hoped that FMT studies will lead to identification of defined mixtures of beneficial bacteria that can be cultured, manufactured, and administered to improve human health."


Fast Facts



  • Fecal transplants are a promising approach for treating recurrent C. difficile infections, a common cause of potentially life-threatening diarrhea.

  • In this case report, a woman successfully treated for a relapsing C. difficile infection with a fecal transplant rapidly became overweight for the first time in her life. The stool donor, the woman's daughter, was overweight.

  • The report suggests that donor screening for these transplants should exclude those who are overweight.




Story Source:


The above story is based on materials provided by the Infectious Diseases Society of America. Note: Materials may be edited for content and length.



New Eruption Started at Piton de la Fournaise


A lava flow on Piton de la Fournaise seen in the Piton de Bert webcam, taken February 4, 2015. Photo by OVPF.



A new eruption has started on remote Reunion Island, where Piton de la Fournaise is now producing a lava flow that is snaking down its slopes (see above). The new eruption is coming from the southern side of the volcano, and the Observatoire Volcanologique du Piton de la Fournaise webcams show a lava flow from Piton de Bert, one of the peaks of Piton de la Fournaise. A swarm of earthquakes preceded this eruption, with at least a few hundred recorded earlier on Wednesday (2/4). Poor visibility has hampered volcanologists trying to see what is happening, but the eruption now appears to be easier to see on those OVPF webcams (see below).


This is the second eruption at Piton de la Fournaise in the past year, after three years of quiet. The previous eruption lasted only a few hours, and the volcano’s alert status had recently been lowered. Similar to Kilauea, the volcano mainly produces lava flows, although the lava at Piton de la Fournaise is less viscous than at Kilauea thanks to the higher alkali content (sodium, potassium, calcium) in the lava.


The glow from the new eruption at Piton de la Fournaise, seen in the Picon Basaltes webcam on February 4, 2015. Photo by OVPF.



I’ll add more details as they become available.



The Upside of Artificial Intelligence Development



jeffedoe/Flickr



In “Practical Artificial Intelligence Is Already Changing the World,” I promised to write a follow-on article that discussed why Kevin Kelly (@kevin2kelly), the founding executive editor of Wired magazine, and Irving Wladawsky-Berger, a former IBM employee and strategic advisor to Citigroup, are optimistic about the future of artificial intelligence (AI). In that article I noted that some pundits believe that AI poses a grave threat to humanity while other pundits believe that AI systems are going to be tools that humans can use to improve conditions around them. I also wrote that it would be foolish to predict which school of thought is correct this early in the game.


In the near-term, however, I predicted that those who believe that AI systems are tools to be used by humans are going to be proven correct. Irving Wladawsky-Berger is firmly in that camp and he believes that Kevin Kelly is as well. “What should we expect from this new generation of AI machines and applications?” asks Wladawsky-Berger. “Are they basically the next generation of sophisticated tools enhancing our human capabilities, as was previously the case with electricity, cars, airplanes, computers and the Internet? Or are they radically different from our previous tools because they embody something as fundamentally human as intelligence? Kevin Kelly — as am I — is firmly in the AI-as-a-tool camp.” [“The Future of AI: An Ubiquitous, Invisible, Smart Utility,” The Wall Street Journal, 21 November 2014]


Wladawsky-Berger bases his conclusion about Kevin Kelly’s views on artificial intelligence on what Kelly wrote in an article in Wired magazine. [“The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired, 27 October 2014] In that article, Kelly writes about IBM’s Watson system, how it is transforming as it learns, and all of the good things that cognitive computing systems can do now and will do in the future. He continues:



“Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000 — a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness — or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services — cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”





Unlike the dire warnings that have filled news outlets over the past year, Kelly’s view of the future of AI is not only optimistic, it’s almost joyous. Wladawsky-Berger and Kelly are not alone in their optimism about AI’s future. Timothy B. Lee, senior editor at @voxdotcom, also believes that the upside of artificial intelligence will far outweigh the risks of developing it further. [“Will artificial intelligence destroy humanity? Here are 5 reasons not to worry,” Vox, 15 January 2015] Lee believes the naysayers “overestimate the likelihood that we’ll have computers as smart as human beings and exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them.” Even though Kelly is optimistic about the future of AI, he doesn’t dismiss the cautions being raised about how it’s developed. He writes, “As AIs develop, we might have to engineer ways to prevent consciousness in them — our most premium AI services will be advertised as consciousness-free.” Kelly’s big concern about AI’s future is who will control the systems we use. He explains:



“Cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”
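Kelly’s “virtuous cycle” is a positive-feedback loop, and a toy simulation makes the winner-take-most dynamic easier to see. Everything in the sketch below (the two competing services, the super-linear attachment rule, the starting counts) is invented for illustration; it is not a model proposed by Kelly.

```python
# Toy "rich get richer" model of the feedback loop described above: each new
# user picks one of two AI services with probability that grows faster than
# linearly with current user counts, standing in for "the more people use it,
# the smarter it gets, the more people use it." All numbers are invented.
import random

def simulate(new_users=100_000, a=55, b=45, alpha=1.5, seed=0):
    rng = random.Random(seed)
    for _ in range(new_users):
        p_a = a ** alpha / (a ** alpha + b ** alpha)
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a / (a + b)

print(round(simulate(), 3))  # typically ends well above the initial 0.55 share:
                             # the slightly larger service pulls away
```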



That concern aside, Kelly believes that AI will help make humans smarter and more effective. He notes, for example, that AI chess programs have helped make human chess players much better. He adds, “If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.” In other words, Kelly sees AI as a tool that can help mankind get better, not a threat that is going to destroy mankind. He continues:



“Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists. In fact, this won’t really be intelligence, at least not as we’ve come to think of it. Indeed, intelligence may be a liability — especially if by ‘intelligence’ we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness.”



I agree with that assessment. Derrick Harris, a senior writer at Gigaom, asserts that artificial intelligence (at least the narrow kind) is here, is real, and is getting better. [“Artificial intelligence is real now and it’s just getting started,” Gigaom, 9 January 2015] He explains:



“Artificial intelligence is already very real. Not conscious machines, omnipotent machines or even reasoning machines (yet), but statistical machines that automate and increasingly can outperform humans at certain pattern-recognition tasks. Computer vision, language understanding, anomaly detection and other fields have made immense advances in the past few years. All this work will be the stepping stones for future AI systems that, decades from now, might perform feats we’ve only imagined computers could perform.”



Will artificial intelligence be disruptive? Of course it will. It will change the employment landscape in major ways, displacing millions of workers who thought their jobs were safe. Will it create new jobs? Certainly. In fact, I suspect that entire new business sectors are going to be developed as a result of AI. The question will be whether enough new jobs can be created to replace those that have been taken over by smart machines. Will artificial general intelligence be developed? Maybe. And if AGI is developed, that’s where caution needs to be taken. Is humankind in danger? Not at the moment. Narrow uses of AI are going to help humans do a lot of amazing things in the years ahead. In fact, we will wonder how we ever lived without it.


Stephen F. DeAngelis is President and CEO of the cognitive computing firm Enterra Solutions.



Silicon Valley’s Biggest Worry Should Be Inequality, Not a Bubble



Oivind Hovland/Getty



Silicon Valley’s economy may be booming, but according to a new report, in recent years, the region has also “lost its spine”—that spine being the middle class.


In its new Silicon Valley Index, the research organization Joint Venture Silicon Valley delivered a glowing report of the Valley’s economy in 2014, from its high rate of job growth to the cushy venture capital environment to the rush of patent activity. The region is strong economically, the report argues, and will be for “the foreseeable future.”


But while the news may be heartening to those worried that the tech industry is in the middle of a bubble, it may be frightening to anyone who wants to see more equality between races, genders, and the haves and have-nots in the Valley.


“The growth is uneven,” Russell Hancock, president and CEO of Joint Venture Silicon Valley, writes in the report. “Though we’re proliferating high-wage and low-wage jobs, we’re steadily losing share in the middle. It’s as if the economy has lost its spine, and this has important implications for the kind of community we become.”


Silicon Valley added roughly 58,000 jobs in 2014, a 4.1 percent increase from the year before. The nationwide job growth rate, by contrast, was 1.8 percent. And yet, even as the number of jobs in the area increases, the wage gap between low-wage and high-wage jobs is ever widening. In 2014, the difference in median income between high-wage and low-wage workers was around $92,000, compared to around $70,000 elsewhere in the Bay Area. Meanwhile, the number of mid-wage jobs in Silicon Valley has also declined 4.5 percent since 2001, reflecting a larger trend across the country.


The report also lays out in stark detail the vast difference between what people of different genders and races are paid in the Valley. Men in the region make up to 61 percent more than their female peers, the report notes, a number that “is more pronounced in Silicon Valley than in San Francisco, California, or the United States, and is getting larger over time.”


The same goes for black and Latino employees. According to the report, the wage gap between white employees (the highest-earning racial group) and black and Latino employees (the lowest-earning groups) is around $40,000 and $44,000, respectively. In the rest of the country, that gap is around $18,000.


For Elise Gould, a senior economist at the Economic Policy Institute, one particularly significant finding of the study is the change in per capita income by race between 2007 and 2013. Since 2007, black Silicon Valley residents have experienced a nearly 21-percent drop in per capita income, compared with a 4.9-percent drop for the same group throughout the rest of the country. Latino residents in the Valley have also been hit hard, with a nearly 12-percent drop in per capita income, compared with a 7.5-percent drop in other parts of the country. Meanwhile, white employees in the Valley have experienced a 0.2-percent increase.


“That is really a striking finding that the loss of income in the Valley was greater than the loss of U.S. income as a whole for both black and Latino workers,” she says. “The disparities between racial and ethnic groups appear to be widening faster in this area than they are in the U.S. as a whole.”


Gould notes that one way to fix this inequality would be to make it easier for low-level employees to form unions and negotiate their wages collectively. “Whatever policies can boost the bargaining power of workers at the bottom, that can help reduce the income inequality you’re seeing,” she says.


Reducing that inequality, she argues, is critical to the ongoing economic recovery in the Valley. And it is very much ongoing, she warns. It’s easy to celebrate 4.1-percent job growth and $7.3 billion in venture capital investment. But it’s equally important to realize that median income in the Valley is still lower than it was before the Great Recession. That, Gould says, has implications not only for the people living in the Valley but also for the many companies launching and operating there.


“When people don’t have enough money in their pockets to go spend on things they need, then you can’t spur more economic activity,” she says. “There hasn’t been great growth at all in income, which means it’s still very early in the recovery.”



FCC Chairman Tom Wheeler: This Is How We Will Ensure Net Neutrality


Federal Communications Commission (FCC) Chairman Tom Wheeler waits for a hearing at the FCC on December 11, 2014, in Washington, DC. Brendan Smialowski/AFP/Getty Images



After more than a decade of debate and a record-setting proceeding that attracted nearly 4 million public comments, the time to settle the Net Neutrality question has arrived. This week, I will circulate to the members of the Federal Communications Commission (FCC) proposed new rules to preserve the internet as an open platform for innovation and free expression. This proposal is rooted in long-standing regulatory principles, marketplace experience, and public input received over the last several months.


Broadband network operators have an understandable motivation to manage their network to maximize their business interests. But their actions may not always be optimal for network users. The Congress gave the FCC broad authority to update its rules to reflect changes in technology and marketplace behavior in a way that protects consumers. Over the years, the Commission has used this authority to the public’s great benefit.




Tom Wheeler is the Chairman of the Federal Communications Commission.



The internet wouldn’t have emerged as it did, for instance, if the FCC hadn’t mandated open access for network equipment in the late 1960s. Before then, AT&T prohibited anyone from attaching non-AT&T equipment to the network. The modems that enabled the internet were usable only because the FCC required the network to be open.


Companies such as AOL were able to grow in the early days of home computing because these modems gave them access to the open telephone network.


I personally learned the importance of open networks the hard way. In the mid-1980s I was president of a startup, NABU: The Home Computer Network. My company was using new technology to deliver high-speed data to home computers over cable television lines. Across town Steve Case was starting what became AOL. NABU was delivering service at the then-blazing speed of 1.5 megabits per second—hundreds of times faster than Case’s company. “We used to worry about you a lot,” Case told me years later.


But NABU went broke while AOL became very successful. That difference in fortunes highlights the fundamental problem with allowing networks to act as gatekeepers.


While delivering better service, NABU had to depend on cable television operators granting access to their systems. Steve Case was not only a brilliant entrepreneur, but he also had access to an unlimited number of customers nationwide who only had to attach a modem to their phone line to receive his service. The phone network was open whereas the cable networks were closed. End of story.


The phone network’s openness did not happen by accident, but by FCC rule. How we precisely deliver that kind of openness for America’s broadband networks has been the subject of a debate over the last several months.


Originally, I believed that the FCC could assure internet openness through a determination of “commercial reasonableness” under Section 706 of the Telecommunications Act of 1996. While a recent court decision seemed to draw a roadmap for using this approach, I became concerned that this relatively new concept might, down the road, be interpreted to mean what is reasonable for commercial interests, not consumers.


That is why I am proposing that the FCC use its Title II authority to implement and enforce open internet protections.


Using this authority, I am submitting to my colleagues the strongest open internet protections ever proposed by the FCC. These enforceable, bright-line rules will ban paid prioritization, and the blocking and throttling of lawful content and services. I propose to fully apply—for the first time ever—those bright-line rules to mobile broadband. My proposal assures the rights of internet users to go where they want, when they want, and the rights of innovators to introduce new products without asking anyone’s permission.


All of this can be accomplished while encouraging investment in broadband networks. To preserve incentives for broadband operators to invest in their networks, my proposal will modernize Title II, tailoring it for the 21st century, in order to provide returns necessary to construct competitive networks. For example, there will be no rate regulation, no tariffs, no last-mile unbundling. Over the last 21 years, the wireless industry has invested almost $300 billion under similar rules, proving that modernized Title II regulation can encourage investment and competition.


Congress wisely gave the FCC the power to update its rules to keep pace with innovation. Under that authority my proposal includes a general conduct rule that can be used to stop new and novel threats to the internet. This means the action we take will be strong enough and flexible enough not only to deal with the realities of today, but also to establish ground rules for the as yet unimagined.


The internet must be fast, fair and open. That is the message I’ve heard from consumers and innovators across this nation. That is the principle that has enabled the internet to become an unprecedented platform for innovation and human expression. And that is the lesson I learned heading a tech startup at the dawn of the internet age. The proposal I present to the commission will ensure the internet remains open, now and in the future, for all Americans.



Into Vanuatu’s Volcanic Wonderland





The South Pacific island nation of Vanuatu, a short hop west of Fiji, is a verdant wonderland, the very embodiment of a tropical paradise. Palm-fringed beaches look out toward blue waters that host vibrant ecosystems, while jungles thick with ferns and technicolor flowers burst from the fertile soil.

But as I trek through the thick forest and crest a hill, sweat dripping from my forehead and legs protesting the entire endeavor, I see something in the distance that is very much off-brand. At the heart of this tropical paradise – indeed, the cause of its creation – is a network of powerful volcanoes that make up the southwestern edge of the notorious Ring of Fire. It’s not all Mai Tais and paragliding in Vanuatu, and the thick plume of volcanic gases in the distance is undeniable proof.


I found myself in this remote corner of the globe, staring at evidence of powerful, planet-forming forces, as the chief scientist of an ambitious expedition to descend into Marum Crater and stand on the shore of its infamous lava lake*. Rappelling hundreds of meters down toward a gurgling cauldron of molten rock doesn’t exactly square with well-evolved self-preservation instincts, but the scientific upside was appealing. At the bottom of Marum Crater, new rock was constantly being created, while toxic sulfur dioxide gas swirled and mineralogical deposits left a colorful palette of stained rock. As a geobiologist interested in microbiological adaptations to energetically extreme and biochemically exotic sites, I was intrigued.


But getting to this alluring sampling point was not easy, and three hours into the humid hike toward the Marum Crater rim, exhaustion was starting to set in. Forty hours earlier, I had scampered out of the Moscone Convention Center in San Francisco – where 20,000+ plaid-clad geoscientists were gathered for the annual American Geophysical Union conference – and boarded a plane for the South Pacific. It was a gradual retreat from civilization: first Fiji, with concentric rinds of resorts ringing the beaches, then Port Vila, Vanuatu’s leafy capital that swells with capitalistic fever when a cruise ship drops anchor, and finally Ambrym Island, with its grass airstrip and 60-square-foot “airport”.


It was there that I met Moses, a soft-spoken, deliberate man who summoned one of the four vehicles on the eastern side of the island to transport me to the village of Endu. (When I next saw Moses, about a week later, he would be sporting ceremonial dress in his capacity as village chief.) The road was distinguishable as a thick horizon of ferns, grasses, and small trees – as opposed to the utterly impenetrable curtain of green that extended in all other directions.


From Endu, the trek began, first along the black sand beach, then upward and into the forest. I learned the hard way that hazards were both below (moss-slickened tree roots) and above (fishing net-sized spider webs). Fortunately, “Ambrym has no poisonous spiders,” my guide Solomon continued to remind me, unaware that even harmless arachnids could be off-putting when they were the size of your hand. “Nothing can harm you here.”


Except, of course, for the 4,200-foot-tall volcano that was fuming in the distance, and whose very active past was evidenced by the basalt we had been hiking over for the better part of the morning. First-hand accounts of previous eruptions go back more than a century: grainy images taken just offshore show clouds of ash, black-and-white placidity masking the power of the explosions that formed the island. The 1913 eruptive events put Ambrym on the map – literally and figuratively – by extending the western edge of the island and offering volcanologists an intriguing case study of a fissure eruption. A 19-km rift opened up, spewing ash and gushing lava, which flowed into the hissing sea and forced a hasty evacuation of the nearby missionary hospital (1). Today, the scars of 1913 are hidden beneath the jungle.


The tortured geologic history of the rest of the island can be read through the black and green landscape. As we came down from the ridge that offered the first view of Marum’s plume, we stepped onto a lava river, frozen in place in 1989 and now punctuated by a few ambitious shrubs. We plunged back into the forest, spider web radar re-activated, to cross deposits from eruptions in the early 1900s, emerging onto a hummocky expanse of tall grasses and beautiful orchids – the current inhabitants of lava flows from the 1960s. Ecological succession on Ambrym is textbook-clear – uninhibited by potential environmental limitations like rainfall – and the force with which we wielded the machete was proportional to the age of the terrain we walked over.


Past the orchids and the most perfect cinder cone I’d ever seen (location scouts take note), we arrived at camp – half a dozen tents perched on the rim of Marum Crater. The camp looked like a fortress, complete with moats and wind-battered flags that provided evidence of the torrential downpour I had narrowly missed. The barren basaltic rim was an incongruous no-man’s land: look south, and you see one of the most lush places on the planet. Look down, and you see thin needles of volcanic glass (known as “Pele’s hair”) scattered on top of small ash spherules and crushed basaltic rocks, unmistakable evidence of recent volcanic activity.


But look north, over the sheer cliff of Marum’s rim and down into the multi-colored crater, and you see something utterly unfamiliar. A violent orange glow, a fluorescent punch that I hadn’t thought was on the possible spectrum of natural hues, churns blebs of rock skyward before consuming them once more. The volcano is mesmerizing, and the heat can be felt even from the rim, 1,200 feet above the fiery pit.


The lava lake of Marum is also a geological conundrum. Most volcanic eruptions are short-term events that quickly equilibrate an energetic imbalance. Only about 1% of eruptions persist for more than a decade (2); Marum has been active for at least the last 15 years, according to close observers (3). Understanding where the lava is sourced from – a deep mantle-based reservoir or a shallower repository that may extend laterally across Ambrym – may help clarify how these features form and remain so incredibly active.


After setting up my tent-based laboratory (dirt-encrusted clothes in one corner, sealed sterile tubes for biological samples in the other), I step outside and notice that half the sky is lit up like a nightlight. The volcano’s gaseous plume extends upward into the night, illuminated from below like a mile-tall, convecting lantern. The nearly full moon glints off the water far in the distance, doing battle with the constant sun emanating from within Marum Crater.


*****


*The Marum Crater Descent Expedition was led by Sam Cossman and generously funded by Kenu.

1) Nemeth and Cronin, 2011, Journal of Volcanology and Geothermal Research.

2) Siebert et al., 2010, Volcanoes of the World, 3rd edition.

3) Bradley Ambrose, personal communication.