Comcast to Obama: We’ll Play Nice With Net Neutrality…Honestly


Comcast chairman and CEO Brian Roberts speaks on Wednesday at the Contemporary Jewish Museum in San Francisco. Jeff Chiu/AP



Comcast CEO Brian Roberts may have come to San Francisco to show off a new service designed to make Comcast cable TV look and feel more like the internet. But he knew that the Silicon Valley press corps would be more interested in something else.


“Something like Title II, maybe?” he said, before launching into his extended demo of X1, the internet-based version of cable that Comcast is starting to roll out to US customers. Title II is a once-obscure section of a 1934 law that has become the center of the battle over the future of the internet and that elusive ideal known as net neutrality.


On Monday, the White House released a statement in which President Obama expressed support for reclassifying broadband as a common carrier under Title II, making internet access more like long distance or mobile phone service. In the president’s view, reclassification is what’s needed to ensure net neutrality—an open internet where all traffic is treated equally. But Comcast sees things a little differently.


Roberts believes the debate has been framed the wrong way. People assume that if you’re for net neutrality, you’re also for Title II, he says, but he sees them as very separate things. Comcast believes in net neutrality, he says, but not in Title II, arguing that reclassification would slow the expansion of the internet.


For the major broadband providers, the president’s statement was a gauntlet thrown, a direct challenge to their claims that they can be trusted on their own not to demand payment for preferential treatment on their networks. Net neutrality advocates don’t trust them at all, arguing that without strong protections against fast lanes and metered traffic, smaller online players will get pushed aside—and, with them, competition and innovation. Roberts’ and Comcast’s task is to persuade the public otherwise, that the company does support an open internet and doesn’t need to have its broadband service reclassified under Title II to ensure that support.


To make its case, Comcast, the country’s biggest media company, has essentially one argument: that the status quo is working fine. “We’ve had 20 years of a set of rules that have built, I think, this wonderful world that we all enjoy,” Roberts said.


The More Things Change


This is not to say that Comcast doesn’t claim to support any new rules at all. The company says it backs the FCC creating “new, strong Open Internet rules” that include no blocking, throttling, or “paid prioritization” of traffic. But in voicing that support, Comcast isn’t really calling for any kind of change, at least in its own practices, because it says it’s already doing all of those things.


Netflix may disagree, but no one doubts that reclassification under Title II would represent a fundamental change in how the government oversees Comcast’s business. The problem with that change, in Roberts’ view, is that it could curtail the steady expansion of internet infrastructure into which companies like his are pouring billions of dollars.


“We want to have those open internet rules. We want them to be enforceable. But we don’t want to discourage investing,” Roberts said. “We can’t find anything that Title II does to encourage investing.”


Uncertain Connections


Roberts didn’t go so far as AT&T CEO Randall Stephenson did on Wednesday. In a statement that responded to the president’s, Stephenson said AT&T was halting its build-out of high-speed internet connections in 100 US cities until the company knew what rules would govern those connections. But Roberts’ message was clear: “uncertainty” under Title II could mean less money spent on better connections.


Instead of courting that risk, the subtext of Roberts’ presentation seemed to be: don’t worry so much. We support an open internet. We’ve got a cool new interface for cable that integrates Twitter and IFTTT. It has voice control, new options for binge-watching, even sophisticated new audio navigation for blind users. Consumers want TV to work more like the internet, and we listened. You want innovation? Here it is.


More change than that, it seems, is not the kind Comcast can believe in.



Amazon Embraces Docker, Following Google and Microsoft’s Lead


The next big thing in cloud computing just found a prime spot at the world’s largest cloud computing company.


On Thursday, Amazon announced a new cloud service that helps software developers and businesses build and operate their online applications using Docker, a technology that aims to make online software significantly more efficient.


Microsoft is getting behind Docker in a big way, even promising to offer something similar in the next version of Windows. Google just rolled out a cloud service that seeks to facilitate the use of Docker. And now, at its annual cloud computing conference in Las Vegas, Amazon has announced a similar tool called the EC2 Container Service.


You can think of Docker as a shipping container for software running on the internet. These containers make it easier to move software from machine to machine and run it more efficiently on each one. This is vitally important in an age when online software runs across tens, hundreds, or even thousands of machines.


Every major cloud computing company—from newcomers like Digital Ocean to old school providers like Rackspace and Joyent—is now embracing Docker. Amazon has long allowed developers to run Docker containers on its primary cloud computing service, EC2, a means of running software without setting up your own computer servers. But the new EC2 Container Service represents an even deeper commitment to the technology.


Containers are a bit like virtual machines in that they make it possible to bundle up several different pieces of software and run them on a single server without them interfering with each other. But unlike a traditional virtual machine created with tools from companies like VMware, a Docker container doesn’t require its own operating system. That means you can pack far more containers onto a single server than you could virtual machines.
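

For a concrete sense of how lightweight containers are, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes a running Docker daemon and a locally available alpine image, and it is only an illustration of the idea, not anything specific to Amazon’s service.

```python
# Illustration of how lightweight containers are, using the Docker SDK for
# Python. Assumes a running Docker daemon and that `docker pull alpine` has
# already been run. Every container shares the host kernel, so no guest
# operating system has to boot for any of them.
import docker

client = docker.from_env()

# Start ten containers on one machine; each comes up in roughly the time it
# takes to start a process, not the minutes a full virtual machine would need.
containers = [
    client.containers.run("alpine", "sleep 300", detach=True)
    for _ in range(10)
]
print(f"started {len(containers)} containers")

# Clean up.
for c in containers:
    c.remove(force=True)
```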


The trouble, Amazon Web Services CTO Werner Vogels explained at the conference, is that it’s still hard to manage large numbers of containers. Figuring out how to pack all these containers onto each server to make the best use of that server’s resources can be a bit like playing Tetris. The EC2 Container Service aims to ease that burden by automatically optimizing the placement of each container.
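

The placement problem Vogels described is essentially bin packing. The toy sketch below uses a simple first-fit-decreasing heuristic to show the flavor of the problem; it is not the scheduler the EC2 Container Service actually uses, and the memory figures are made up.

```python
# Toy illustration of the container-placement problem: fitting containers
# onto servers is a bin-packing exercise. First-fit-decreasing heuristic;
# not the algorithm Amazon's service actually uses.

def place_containers(container_mem, server_capacity):
    """Assign each container (by memory need, in MB) to a server."""
    servers = []     # remaining free memory on each server
    placement = {}   # container index -> server index
    # Place the biggest containers first; they are the hardest to fit.
    for idx, need in sorted(enumerate(container_mem), key=lambda x: -x[1]):
        for s, free in enumerate(servers):
            if free >= need:
                servers[s] -= need
                placement[idx] = s
                break
        else:
            servers.append(server_capacity - need)  # bring up a new server
            placement[idx] = len(servers) - 1
    return placement, len(servers)

placement, used = place_containers([512, 128, 256, 1024, 64, 768], 2048)
print(f"{used} server(s) needed:", placement)
```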


One of the most important aspects of the service is the ability to easily deploy containers across all of Amazon’s various data centers, says Docker’s senior vice president of product Scott Johnson. Amazon lets customers choose among several geographically separated server farms, called “availability zones.”


Customers can run applications in more than one availability zone so that if one zone goes down the application will stay up. The new container service will make it far easier to create this sort of resilient architecture.
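

As a rough idea of what driving the service from code might look like, here is a sketch using the boto3 ECS client as it exists today. The cluster name, task family, and image are hypothetical placeholders, and the exact workflow may differ from what Amazon demonstrated at the conference.

```python
# Sketch of launching containers on the EC2 Container Service with boto3.
# Cluster, family, and image names are placeholders; error handling omitted.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the container(s) that make up one unit of the application.
ecs.register_task_definition(
    family="web-frontend",          # hypothetical task family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",    # any Docker image works here
        "memory": 256,              # MB reserved on the host
        "cpu": 128,                 # CPU units
        "essential": True,
    }],
)

# Ask the service to find room for two copies of that task on the cluster's
# EC2 instances; ECS makes the placement decisions across the machines.
response = ecs.run_task(
    cluster="demo-cluster",         # hypothetical cluster name
    taskDefinition="web-frontend",
    count=2,
)
print([task["taskArn"] for task in response["tasks"]])
```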


The big thing still missing, however, is a way to manage containers across different cloud providers. Amazon’s new service is similar to Kubernetes, the open source foundation of Google’s Docker service, Google Container Engine, and to Mesos, an open source clone of Google’s internal technologies used at companies like Twitter. But it will only work on Amazon’s own servers.


You can use the same Docker container on Amazon that you use on Google’s cloud service, Johnson explains. But although the containers themselves don’t vary, the management tools do. Johnson says Docker has a plan to deal with this that will be announced next month.



Amazon Resolves Dispute With Top-Five Publisher Hachette Over Book Sales


Amazon Kindle. Photo: Josh Valcarcel/WIRED



Amazon and Hachette—one of the country’s top five book publishers—have resolved their long and bitter battle over book pricing on the internet, and the deal appears to be a big win for authors as well as publishers at a time when the internet is driving down the price of books.


Hachette says it can now set the prices on the ebooks it publishes on Amazon—something that Amazon, one of the biggest and most influential book sellers on the net, had sought to block in the past. “The new agreement will benefit Hachette authors for years to come,” Hachette CEO Michael Pietsch said in a statement. “It gives Hachette enormous marketing capability with one of our most important bookselling partners.”


On Thursday morning, Amazon and Hachette announced that they have reached a new multi-year agreement covering both ebook and print book sales. The two companies did not disclose the details of the deal, but both sides say they are pleased with the terms. The new ebook terms will go into effect early next year. Amazon, which had stopped selling some of Hachette’s titles and deliberately delayed the delivery of others, will reinstate the publisher’s entire catalog.


Amazon controls around 50 percent or more of all U.S. book sales, according to estimates. Though it didn’t create the first e-reader on the market, it was a major force in building the ebook business in the U.S. after the hugely successful launch of the first Kindle in 2007. But after the launch of Apple’s iBooks, which had the support of five of the world’s six largest publishers, Amazon’s market share took a dive.


“We are pleased with this new agreement,” reads a statement from Amazon in the wake of the Hachette deal. “It includes specific financial incentives for Hachette to deliver lower prices, which we believe will be a great win for readers and authors alike.”



Bacteria become 'genomic tape recorders', recording chemical exposures in their DNA

MIT engineers have transformed the genome of the bacterium E. coli into a long-term storage device for memory. They envision that this stable, erasable, and easy-to-retrieve memory will be well suited for applications such as sensors for environmental and medical monitoring.



"You can store very long-term information," says Timothy Lu, an associate professor of electrical engineering and computer science and biological engineering. "You could imagine having this system in a bacterium that lives in your gut, or environmental bacteria. You could put this out for days or months, and then come back later and see what happened at a quantitative level."


The new strategy, described in the Nov. 13 issue of the journal Science, overcomes several limitations of existing methods for storing memory in bacterial genomes, says Lu, the paper's senior author. Those methods require a large number of genetic regulatory elements, limiting the amount of information that can be stored.


The earlier efforts are also limited to digital memory, meaning that they can record only all-or-nothing memories, such as whether a particular event occurred. Lu and graduate student Fahim Farzadfard, the paper's lead author, set out to create a system for storing analog memory, which can reveal how much exposure there was, or how long it lasted. To achieve that, they designed a "genomic tape recorder" that lets researchers write new information into any bacterial DNA sequence.


Stable memory


To program E. coli bacteria to store memory, the MIT researchers engineered the cells to produce a recombinase enzyme, which can insert DNA, or a specific sequence of single-stranded DNA, into a targeted site. However, this DNA is produced only when activated by the presence of a predetermined molecule or another type of input, such as light.


After the DNA is produced, the recombinase inserts the DNA into the cell's genome at a preprogrammed site. "We can target it anywhere in the genome, which is why we're viewing it as a tape recorder, because you can direct where that signal is written," Lu says.


Once an exposure is recorded through this process, the memory is stored for the lifetime of the bacterial population and is passed on from generation to generation.


There are a couple of different ways to retrieve this stored information. If the DNA is inserted into a nonfunctional part of the genome, sequencing the genome will reveal whether the memory is stored in a particular cell. Or, researchers can target the sequences to alter a gene. For example, in this study, the new DNA sequence turned on an antibiotic resistance gene, allowing the researchers to determine how many cells had gotten the memory sequence by adding antibiotics to the cells and observing how many survived.


By measuring the proportion of cells in the population that have the new DNA sequence, researchers can determine how much exposure there was and how long it lasted. In this paper, the researchers used the system to detect light, a lactose metabolite called IPTG, and an antibiotic derivative called aTc, but it could be tailored to many other molecules or even signals produced by the cell, Lu says.
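

As a rough illustration of how a population can act as an analog memory, consider a toy model (not the authors’ actual kinetics) in which each cell has some small, assumed probability of writing the memory during each unit of exposure. The fraction of cells carrying the insert then grows predictably with exposure time and can be inverted to estimate that time:

```python
# Toy model of population-level analog memory (not the MIT group's kinetics).
# Assumption: each cell independently records with probability P_RECORD per
# unit of exposure, so the expected recorded fraction is 1 - (1 - p)**t.
import math
import random

P_RECORD = 0.02      # assumed per-cell recording chance per unit of exposure
N_CELLS = 100_000    # size of the simulated bacterial population

def simulate_fraction(exposure_time, p=P_RECORD, n=N_CELLS):
    """Fraction of cells that carry the memory after `exposure_time` units."""
    threshold = 1 - (1 - p) ** exposure_time
    recorded = sum(1 for _ in range(n) if random.random() < threshold)
    return recorded / n

def estimate_exposure(fraction, p=P_RECORD):
    """Invert the model: recover exposure time from the recorded fraction."""
    return math.log(1 - fraction) / math.log(1 - p)

for t in (5, 20, 50):
    f = simulate_fraction(t)
    print(f"true exposure {t:>2} -> recorded fraction {f:.3f} "
          f"-> estimated exposure {estimate_exposure(f):.1f}")
```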


The information can also be erased by stimulating the cells to incorporate a different piece of DNA in the same spot. This process is currently not very efficient, but the researchers are working to improve it.


"This work is very exciting because it integrates many useful capabilities in a single system: long-lasting, analog, distributed genomic storage with a variety of readout options," says Shawn Douglas, an assistant professor at the University of California at San Diego who was not involved in the study. "Rather than treating each individual cell as a digital storage device, Farzadfard and Lu treat an entire population of cells as an analog 'hard drive,' greatly increasing the total amount of information that can be stored and retrieved."


Bacterial sensors


Environmental applications for this type of sensor include monitoring the ocean for carbon dioxide levels, acidity, or pollutants. In addition, the bacteria could potentially be designed to live in the human digestive tract to monitor someone's dietary intake, such as how much sugar or fat is being consumed, or to detect inflammation from irritable bowel disease.


These engineered bacteria could also be used as biological computers, Lu says, adding that they would be particularly useful in types of computation that require a lot of parallel processing, such as picking patterns out of an image.


"Because there are billions and billions of bacteria in a given test tube, and now we can start leveraging more of that population for memory storage and for computing, it might be interesting to do highly parallelized computing. It might be slow, but it could also be energy-efficient," he says.


Another possible application is engineering brain cells of living animals or human cells grown in a petri dish to allow researchers to track whether a certain disease marker is expressed or whether a neuron is active at a certain time. "If you could turn the DNA inside a cell into a little memory device on its own and then link that to something you care about, you can write that information and then later extract it," Lu says.


The research was funded by the National Institutes of Health, the Office of Naval Research, and the Defense Advanced Research Projects Agency.



How Campylobacter exploits chicken 'juice' highlights need for hygiene

A study from the Institute of Food Research has shown that Campylobacter's persistence in food processing sites and the kitchen is boosted by 'chicken juice.'



Organic matter exuding from chicken carcasses, "chicken juice," provides these bacteria with the perfect environment to persist in the food chain. This emphasises the importance of cleaning surfaces in food preparation, and may lead to more effective ways of cleaning that can reduce the incidence of Campylobacter.


The study was led by Helen Brown, a PhD student supervised by Dr Arnoud van Vliet at IFR, which is strategically funded by the Biotechnology and Biological Sciences Research Council. Helen's PhD studentship is co-funded by an industrial partner, Campden BRI.


The researchers collected the liquids produced from defrosting chickens, and found that this helped Campylobacter attach to surfaces and subsequently form biofilms. Biofilms are specialised structures some bacteria form on surfaces that protect them from threats from the environment.


"We have discovered that this increase in biofilm formation was due to chicken juice coating the surfaces we used with a protein-rich film," said Helen Brown. "This film then makes it much easier for the Campylobacter bacteria to attach to the surface, and it provides them with an additional rich food source."


Campylobacter aren't particularly hardy bacteria, so one area of research has been to understand exactly how they manage to survive outside of their usual habitat, the intestinal tract of poultry. They are sensitive to oxygen, but during biofilm formation the bacteria protect themselves with a layer of slime, which also makes them more resistant to antimicrobials and disinfection treatments.


Understanding this and how Campylobacter persists in the food production process will help efforts to reduce the high percentage of chickens that reach consumers contaminated with the bacteria. Although thorough cooking kills off the bacteria, around 500,000 people suffer from Campylobacter food poisoning each year in the UK. Reducing this number, and the amount of infected chicken on supermarket shelves, is now the number one priority of the Food Standards Agency.


"This study highlights the importance of thorough cleaning of food preparation surfaces to limit the potential of bacteria to form biofilms," said Helen.




Story Source:


The above story is based on materials provided by Norwich BioScience Institutes. Note: Materials may be edited for content and length.



Why Lowering NYC’s Speed Limit by Just 5 MPH Can Save a Lot of Lives


Cars are viewed on a Manhattan Street on November 7, 2014 in New York City. Spencer Platt/Getty Images



Last week, New York City officially lowered its default speed limit from the standard 30 mph to 25. That difference may seem arbitrary and hardly worth noting, but it makes a real difference when it comes to saving lives. For one, cars going a bit more slowly will have an easier time avoiding crashes in the first place. But the bigger effect is a huge jump in pedestrian survival rates when crashes do happen: the laws of physics and human anatomy make 30 mph far deadlier than 25 mph. At the higher speed, a car is more likely to lift a pedestrian off the ground and drive the impact into vital areas like the head.


“I’d estimate that a person is about 74 percent more likely to be killed if they’re struck by vehicles traveling at 30 mph than at 25 mph,” says Brian Tefft, a researcher with the AAA Foundation for Traffic Safety who wrote a 2011 report on the subject. He looked at 549 vehicle-pedestrian accidents occurring across the US between 1994 and 1998, accounting for factors like vehicle size and pedestrian BMI. The risk of serious injury (defined as likely to result in long-term disability) for a pedestrian hit at 23 mph was about 25 percent. At 39 mph, it jumped to 75 percent. Analyzing his findings, Tefft says that from “25 to 35 mph, they’re almost three times as likely to be killed.” The median impact speed for fatal pedestrian crashes, he found, was 35 mph.


A 2010 study in London had similar findings: “In all of the pedestrian datasets, the risk of fatality increases slowly until impact speeds of around 30 mph. Above this speed, risk increases rapidly – the increase is between 3.5 and 5.5 times from 30 mph to 40 mph,” the author, D.C. Richards, writes.


So why doesn’t a 20 percent change in speed just mean a 20 percent change in serious injuries? There are lots of variables at work here (is the car an Escalade or a Fiat? is it a direct hit or a side swipe?), but, it turns out, the 30 mph mark is something of a limit for what our bodies can live through. Above that speed, organs and the skull aren’t necessarily strong enough to withstand the kinetic impact of a bumper and windshield.
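

One back-of-the-envelope way to see why risk climbs faster than speed is kinetic energy, which grows with the square of velocity. The snippet below compares impact energies at the speeds discussed above; it is simple physics for intuition only, not the statistical models behind the Tefft or Richards figures.

```python
# Kinetic energy scales with the square of speed (KE = 0.5 * m * v**2), so a
# modest speed increase delivers a disproportionately larger impact.
# Illustrative only; real injury risk depends on far more than energy.

MPH_TO_MS = 0.44704  # meters per second per mph

def kinetic_energy_joules(speed_mph, mass_kg=1500):  # roughly a midsize car
    v = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * v ** 2

for low, high in [(25, 30), (30, 40)]:
    ratio = kinetic_energy_joules(high) / kinetic_energy_joules(low)
    print(f"{low} mph -> {high} mph: {ratio:.2f}x the kinetic energy "
          f"({(ratio - 1) * 100:.0f}% more)")
```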


“It has to do with fracture forces,” says Dr. Peter Orner, a licensed physician and former engineering professor who consults on injury biomechanics in car crashes. “As velocity increases, you’re crossing thresholds.” Though he’s skeptical of the comprehensiveness of studies like Tefft’s, Orner also says that at higher speeds, “the car is going to scoop them up.” And when you’re talking about cars, what gets scooped up is usually smacked against a windshield or thrown onto the ground. That can easily lead to brain trauma.


The good news is that cars have gotten safer over the past few years, and federal regulations designed to protect pedestrians in the event of a crash are making a difference. But then throw in the rise of distracted driving and, yes, distracted walking, which has led to a recent spike in pedestrian injuries, including some that don’t involve automobiles. So it seems like a good idea for NYC to make everyone drive a bit more slowly.


Whether the speed limit change will make for safer roads is up for debate. “If the actual [car] speeds are reduced in response to the change in the speed limit,” Tefft says, “it should have a safety benefit.” But that’s not always the case. “In general,” he says, “the research shows that it takes more than the number on the sign to change the speeds of traffic.”



Incredible New Photos Taken From the Surface of a Comet




Editor’s Note: We will update this gallery with new images as they become available.


For millennia, people have seen comets come and go from afar, watching the mysterious, bright objects suddenly appear in the sky with long, spectacular tails. Now the Rosetta mission has provided an unprecedented close-up perspective. The spacecraft’s images of comet 67P/Churyumov-Gerasimenko’s surface reveal a rugged environment, covered with jagged rock and sharp cliffs. And its lander, Philae, has snapped the first-ever photos from the surface of a comet.


The photos in this gallery include those first shots, as well as images of the lander’s descent taken by both Philae and Rosetta. Also included are some gorgeous images of the comet captured during Rosetta’s reconnaissance flyby at 10 kilometers above the surface.


It’s dark out in space, especially at the comet’s current location about 278 million miles from the sun. The comet itself is blacker than coal. To highlight the features on the surface, the contrast in some of these black-and-white images has been enhanced.
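

Contrast enhancement of this kind can be as simple as a linear stretch that remaps the darkest and brightest pixel values onto the full display range. The sketch below shows the idea with NumPy; it is an illustration, not the processing pipeline actually used on the Rosetta images.

```python
# Minimal linear contrast stretch for a grayscale image, illustrating the
# kind of enhancement applied to dark comet frames (not ESA's pipeline).
import numpy as np

def stretch_contrast(image, low_pct=1, high_pct=99):
    """Remap the given percentile range of pixel values onto 0..255."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# Example with a synthetic, mostly dark frame:
dark_frame = (np.random.rand(64, 64) * 30).astype(np.uint8)
print(dark_frame.max(), stretch_contrast(dark_frame).max())
```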


The Rosetta spacecraft, which is orbiting the comet, is equipped with OSIRIS, a wide- and narrow-angle camera system. The Philae lander has two imaging instruments: a set of six cameras called ÇIVA that take surface panoramas, and another camera called ROLIS, which took pictures of the comet during the descent and will help study the texture and fine structure of the comet’s surface.