Barcode devised for bacteria that cause tuberculosis

The bacteria that cause the deadly respiratory disease have evolved into families of strains, or lineages, which may affect people differently.

To help identify the different origins and map how tuberculosis moves around the world, spreading from person to person through the air, the research team studied over 90,000 genetic mutations.

According to the study -- published in Nature Communications -- the researchers found that just 62 mutations are needed to code the global family of strains.
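In spirit, such a barcode turns strain typing into a simple lookup: check a genome's alleles at a small set of diagnostic positions and report the lineage whose markers all match. A minimal Python sketch of the idea, using made-up positions and alleles rather than the published 62-SNP set:

```python
# Hypothetical SNP barcode: lineage -> set of (genome position, allele).
# The published barcode uses 62 specific mutations; these are toy values.
BARCODE = {
    "Lineage 1": {(615938, "G"), (4404247, "A")},
    "Lineage 2": {(497491, "T")},
    "Lineage 4": {(931123, "C"), (1759252, "G")},
}

def type_strain(genotype):
    """Return the first lineage whose barcode SNPs all match the genotype.

    genotype: dict mapping genome position -> observed allele.
    """
    for lineage, snps in BARCODE.items():
        if all(genotype.get(pos) == allele for pos, allele in snps):
            return lineage
    return "unknown"

sample = {615938: "G", 4404247: "A", 931123: "T"}
print(type_strain(sample))  # -> Lineage 1
```

A real implementation would also need to handle missing calls and mixed infections; the point is only that a few dozen well-chosen positions suffice to separate the known global lineages.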

Dr Taane Clark, Reader in Genetic Epidemiology and Statistical Genomics at the London School of Hygiene & Tropical Medicine, who led the study, said: "There is increasing interest in new technologies that can assist those treating tuberculosis patients.

"This new barcode can be easily implemented and used to determine the strain-type that is a surrogate for virulence.

"We are making this information available to the doctors and scientists working with tuberculosis so that they can more easily know what strains they are dealing with."

Dr Ruth McNerney, TB expert and Senior Lecturer in Pathogen Biology and Diagnostics at the School, who was also part of the study, said: "New technology is making it easier to track mutations but genomes are very complicated and we hope this simple bar code will help people with their research."

Tuberculosis is a bacterial disease that often involves the lungs but can affect any part of the body. Untreated, it is often fatal; TB kills an estimated 1.3 million people every year.

The World Health Organization estimates there are 12 million TB patients in the world, and in the UK nearly 9,000 new cases are diagnosed every year. The disease can be carried around the world by people unaware they are infected. The bacterium that causes TB is called Mycobacterium tuberculosis.

Story Source:

The above story is based on materials provided by London School of Hygiene & Tropical Medicine. Note: Materials may be edited for content and length.

Scientists create renewable fossil fuel alternative using bacteria

Researchers have engineered the harmless gut bacterium E. coli to generate renewable propane.

The development is a step towards commercial production of a source of fuel that could one day provide an alternative to fossil fuels.

Propane is an appealing source of cleaner fuel because it has an existing global market. It is already produced as a by-product during natural gas processing and petroleum refining, but both are finite resources. In its current form it makes up the bulk of LPG (liquefied petroleum gas), which is used in many applications, from central heating to camping stoves and conventional motor vehicles.

In a new study, the team of scientists from Imperial College London and the University of Turku in Finland used Escherichia coli to interrupt the biological process that turns fatty acids into cell membranes. The researchers used enzymes to channel the fatty acids along a different biological pathway, so that the bacteria made engine-ready renewable propane instead of cell membranes.

Their ultimate goal is to insert this engineered system into photosynthetic bacteria, so as to one day directly convert solar energy into chemical fuel.

The results of the study are published in the journal Nature Communications.

Dr Patrik Jones, from the Department of Life Sciences at Imperial College London, said: "Although this research is at a very early stage, our proof-of-concept study provides a method for renewable production of a fuel that previously was only accessible from fossil reserves. Although we have only produced tiny amounts so far, the fuel we have produced is ready to be used in an engine straight away. This opens up possibilities for future sustainable production of renewable fuels that at first could complement, and thereafter replace fossil fuels like diesel, petrol, natural gas and jet fuel."

The scientists chose to target propane because it can easily escape the cell as a gas, yet requires little energy to transform from its natural gaseous state into a liquid that is easy to transport, store and use.

"Fossil fuels are a finite resource and as our population continues to grow we are going to have to come up with new ways to meet increasing energy demands. It is a substantial challenge, however, to develop a renewable process that is low-cost and economically sustainable. At the moment algae can be used to make biodiesel, but it is not commercially viable as harvesting and processing requires a lot of energy and money. So we chose propane because it can be separated from the natural process with minimal energy and it will be compatible with the existing infrastructure for easy use," added Dr Jones.

Using E. coli as a host organism, the scientists interrupted the biological process that turns fatty acids into cell membranes. By stopping this process at an early stage, they could remove butyric acid, a nasty-smelling compound that is an essential precursor for propane production.

To interrupt the process, the researchers discovered a new variant of an enzyme called thioesterase which specifically targets fatty acids and releases them from the natural process. They then used a second bacterial enzyme, called CAR, to convert butyric acid into butyraldehyde. Finally, they added a recently discovered enzyme called aldehyde-deformylating oxygenase (ADO), which is known to naturally create hydrocarbons, in order to form propane.
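The route described above can be pictured as a three-step relay in which each enzyme accepts the previous step's product. A toy Python sketch of the naming only (no chemistry is simulated, and the substrate labels are simplified from the article):

```python
# Each step: (enzyme, substrate it accepts, product it yields).
PATHWAY = [
    ("thioesterase", "membrane-bound fatty acid", "butyric acid"),
    ("CAR", "butyric acid", "butyraldehyde"),
    ("ADO", "butyraldehyde", "propane"),
]

def run_pathway(substrate):
    """Push a substrate through the enzyme relay and return the end product."""
    for enzyme, accepts, product in PATHWAY:
        if substrate == accepts:
            substrate = product
    return substrate

print(run_pathway("membrane-bound fatty acid"))  # -> propane
```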

Previous attempts to use the ADO enzyme have proved disappointing as scientists have been unable to harness the natural power of the enzyme to create cleaner fuel. But the scientists at Imperial discovered that by stimulating ADO with electrons they were able to substantially enhance the catalytic capability of the enzyme, and ultimately produce propane.

The level of propane that the scientists produced is currently one thousand times less than what would be needed to turn it into a commercial product, so they are now working on refining their newly designed synthetic process. Dr Jones said: "At the moment, we don't have a full grasp of exactly how the fuel molecules are made, so we are now trying to find out exactly how this process unfolds. I hope that over the next 5-10 years we will be able to achieve commercially viable processes that will sustainably fuel our energy demands."

This research was funded by a grant from the European Research Council.

Story Source:

The above story is based on materials provided by Imperial College London. Note: Materials may be edited for content and length.

War between bacteria, phages benefits humans

In the battle between our immune systems and cholera bacteria, humans may have an unknown ally in bacteria-killing viruses known as phages. In a new study, researchers from Tufts University, Massachusetts General Hospital, Partners In Health, Haiti's National Public Health Laboratory, and elsewhere, report that phages can force cholera bacteria to give up their virulence in order to survive. Importantly, the study -- published in eLife -- found that cholera's mutational escape from phage predation occurs during human infection.

First author Kimberley Seed, Ph.D., and corresponding author Andrew Camilli, Ph.D., both of Tufts University School of Medicine, and their co-authors analyzed the phage resistance properties and DNA sequences of cholera bacteria taken from phage-positive stool samples from patients with cholera in Haiti and Bangladesh, two countries where cholera outbreaks are currently common.

They first determined that cholera bacteria from Haiti changed their DNA in order to fight phages. They then compared the bacteria from Haiti with bacteria from Bangladesh collected over many years to determine whether the changes were happening on multiple occasions in both countries or only in isolated groups or cases.

The research team discovered that across both time and geography, the cholera bacteria mutated during human infection in order to trade their virulence, or ability to persist and make a human sick, for the ability to defend against the phages. Alternatively, in some patients, the cholera bacteria mutated in a more conservative manner to retain virulence, yet sacrificed the ability to grow optimally in the environment. In either scenario, the cholera bacteria appear to have traded something important in order to survive the onslaught from phages.

"This is the first time we have seen cholera bacteria defend themselves from phages while infecting humans. This suggests that these phages are actively working in our favor, first by killing cholera bacteria within the patient, and second, by genetically weakening the bacteria that are shed by the infected patient such that they are less fit to survive in the environment or less able to cause infection in other people," said senior author Andrew Camilli, a Howard Hughes Medical Institute investigator, professor of molecular biology & microbiology at Tufts University School of Medicine, and member of the Molecular Microbiology program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts University.

"This important finding suggests that we may be able to leverage the strength of phages for treating people with cholera or perhaps preventing cholera in people who may have been recently exposed as an alternative to antibiotics," he continued.

"Seeing this rapid evolutionary change in the cholera bacteria occurring during human infection suggests that the phages are posing a very strong threat. And to observe this in two different continents suggests that this is not a one-time find, but that it may be happening consistently during cholera outbreaks," said first author Seed, now assistant professor of molecular, cellular and developmental biology at the University of Michigan. "Additionally, virtually all bacteria can be infected by phages, which are found wherever bacteria are. So this finding with cholera may be the start of a broader understanding of how phages and bacteria evolve."

Previous work by Camilli and Seed, published last year in Nature, provided the first evidence that a phage could acquire a wholly functional and adaptive immune system. They observed that the phage could use this acquired immune system to disarm a phage defense system of the cholera bacteria, allowing the phage to ultimately destroy its bacterial host. This study bolstered the concept of using phage to prevent or treat bacterial infections, and extended the idea that phages can be extremely sophisticated bacterial predators. The team is now investigating the details of this particular arms race between phage and bacteria in hopes of better understanding how phage influence cholera outbreaks and how we can further leverage phages to treat or prevent infections.

The World Health Organization reports that there are an estimated three to five million cases of cholera and 100,000 to 120,000 deaths due to cholera each year. This summer, at least 67 people in Ghana have died of cholera while 6,000 others have been infected. In northern Cameroon, there are reports that 200 people have died and many more have been infected in the last few months. A current outbreak in South Sudan has taken 130 lives out of a total of more than 5,800 cases. In Haiti, since the beginning of the epidemic there (October 2010) and through March of this year, more than 8,500 people have died, out of more than 700,000 reported cases.

Story Source:

The above story is based on materials provided by Tufts University. Note: Materials may be edited for content and length.

Discovery reveals how bacteria distinguish harmful vs. helpful viruses

When they are not busy attacking us, germs go after each other. But when viruses invade bacteria, it doesn't always spell disaster for the infected microbes: Sometimes viruses actually carry helpful genes that a bacterium can harness to, say, expand its diet or better attack its own hosts.

Scientists have assumed the bacterial version of an immune system would robotically destroy anything it recognized as invading viral genes. However, new experiments at Rockefeller University have now revealed that one variety of the bacterial immune system known as the CRISPR-Cas system can distinguish viral foe from friend. And, the researchers report in a paper published August 31 in Nature, it does so by watching for one particular cue.

"Transcription -- an initial step in the process that reads genes, including those of viruses -- makes the difference," says researcher Luciano Marraffini, head of the Laboratory of Bacteriology. "The full genome of viruses in their lytic, or destructive, phase is transcribed. Meanwhile, only a few of the genes from a virus are transcribed during its lysogenic, or dormant, phase."

Viruses in their lytic phase make copies of themselves using a cell's machinery before destroying it to liberate these new viruses. Viruses in their lysogenic phase, meanwhile, quietly integrate into a host's genetic material. And this is where they offer their potential benefit to the bacteria, which co-opt viral genes for their own ends. In fact, some disease-causing microbes, such as the bacterium responsible for diphtheria, must pick up the right virus in order to attack humans.

Scientists have only discovered this adaptive bacterial immune system relatively recently. Its function relies on CRISPRs, sections of DNA that contain repeating sequences interspersed with unique sequences called spacers. (CRISPR stands for clustered regularly interspaced short palindromic repeats.) The spacer sequences match the sequences in the viral genetic code, making it possible for enzymes encoded by CRISPR-associated genes (Cas) to chop out single spacer sequences from the RNA transcribed from the CRISPR DNA. Other Cas enzymes then use these spacer sequences as guides to target invaders for destruction.
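At its core, spacer-guided targeting is sequence matching: if a stored spacer matches a stretch of invading genetic material, the invader is marked for destruction. A toy Python sketch of that matching step (the sequences are invented, and real targeting works through RNA guides and Cas enzymes rather than string search):

```python
# Spacers previously acquired from past invaders (toy 7-letter sequences;
# real spacers are longer and sit between the CRISPR repeats).
SPACERS = ["GATTACA", "CCGGTTA"]

def is_targeted(invader_dna):
    """True if any stored spacer occurs within the invading sequence."""
    return any(spacer in invader_dna for spacer in SPACERS)

print(is_targeted("TTTGATTACATTT"))  # -> True: contains GATTACA
print(is_targeted("AAAACCCC"))       # -> False: no spacer matches
```

The study's finding adds a twist to this picture: for the Type III-A system, a match alone is not enough, because the target must also be transcribed.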

The system can adapt to new invaders by acquiring new spacer sequences to target them. Recently, CRISPR-Cas systems have attracted significant scientific attention because their ability to make precisely targeted cuts in DNA can be put to use to genetically engineer all types of cells.

"Our understanding of CRISPR-Cas systems remains in the early stages, but, so far, it has generally been thought they lack a sophisticated way of discriminating their targets. In other words, once they target something, it will be chopped up," says the study's lead author, graduate student Gregory Goldberg. "For the first time, our work has shown that a CRISPR-Cas system, one found in Staphylococcus bacteria, can detect whether or not a virus is in its destructive phase and poses an immediate threat."

Most previous work has focused on lytic viruses. However, Staphylococci host many viruses capable of entering a lysogenic phase. The researchers also uncovered a telling asymmetry in the Staphylococcal CRISPR system's ability to effectively target a sequence and its counterpart on the two complementary strands of DNA. They suspected this discrepancy arose because transcription proceeds in a single direction for most viral genes, meaning one of the two target strands is not transcribed.

"The big clue showed up when we isolated a mutant virus that managed to evade destruction. Sometimes viruses can do this through a mutation in a target sequence that prevents the system from identifying them. But when we sequenced the genome of this phage, we found a mutation in a region that promotes transcription instead," Goldberg says.

In a series of experiments, he and colleagues tested their hypothesis that the Staphylococcal CRISPR-Cas system, known as Type III-A, can tolerate an infection by a lysogenic virus, so long as the target sequences are not transcribed. They engineered a target sequence that would undergo transcription only in the presence of a specific chemical. As a result, the Type III-A CRISPR-Cas system only destroyed the target in the presence of this chemical.

"This discovery of a transcription requirement is likely to surprise many who work with these systems," Marraffini says. "Although we do not yet understand the mechanism behind it, we can say that the Type III-A system is quite different from other CRISPR-Cas systems, of which there is a mysteriously large variety. Our discovery hints at the possibility that each CRISPR type and subtype recognizes and destroys its targets in different ways, each in tune with a particular bacterium's needs. If these different targeting mechanisms do exist, they could have important implications for biotechnology."

Story Source:

The above story is based on materials provided by Rockefeller University. Note: Materials may be edited for content and length.

Listen to Chubby Checker Sing About Dig Dug On Unearthed Demo Tape

Let’s take a moment and listen to this song about Dig Dug, as sung by Chubby Checker.

Uploaded to Soundcloud earlier this week by user Matt Osborne, the rendition of the throwback rock ’n’ roll number performed by the father of “The Twist” seems to have been recorded for use in a commercial for the arcade game, developed by Namco and released in the U.S. by Atari in 1982.

“My father, Don Osborne, was Vice President of ATARI at the time and he brought this home one day for us to listen to,” Osborne said, as reported on game historian Frank Cifaldi’s new Video Game Preservation Dump blog. “I remember my father being extra excited that Chubby was involved in the project and had great things to say about having met him.”

Checker’s delivery of the song is significantly more passionate and soulful than one would expect from the soundtrack to a ridiculous television commercial. Actually, I can’t stop replaying it.

Osborne says he doesn’t know why Checker’s vocals were not used in the final commercial, but speculates that the ’50s legend may no longer have appealed to a youthful arcade-going audience.

The Celebrity Photo Hacks Couldn’t Have Come at a Worse Time for Apple

The media crush will soon descend on Cupertino, California, as Apple prepares to announce what will surely be its newest iPhone, quite probably its latest laptops, and possibly its first smartwatch. When the new devices arrive next week, they’ll be tied together with an Apple operating system more dependent on the company’s cloud services than ever before. And as the world saw over the weekend, those cloud services might be about as secure as leaving your front door key under the mat.

The exact methods that led to the apparent breaches and theft of photos from celebrity iCloud accounts haven’t been confirmed. Apple says the attacks were “very targeted” at specific user accounts, not the iCloud or Find My iPhone systems as a whole. But for the average user, how it was done is really beside the point. The message the world is hearing is that if it’s that easy to hack Jennifer Lawrence’s iCloud account, it’s probably that easy to hack mine too. For a company about to ask its users to entrust an even greater portion of their digital lives to its cloud, that’s the last thought Apple wants on anyone’s mind.

To refresh: Back in July, in a keynote widely hailed as the return of the company’s mojo, Apple announced a major refresh of iOS that would tie together users’ iPhones, iPads, and MacBooks more closely via the cloud. Among the most obvious additions was iCloud Drive, a Dropbox clone primed to finally make iCloud’s backup and syncing features less esoteric. Even more significantly, Apple released CloudKit, a set of tools for developers to build all kinds of third-party apps on top of iCloud, which let them outsource such tasks as data storage, syncing, and user authentication to Apple.

This last feature is especially unnerving in light of the celebrity photo hacks, since it allows access to other apps based on a user’s Apple ID—the same ID that would seem to have been compromised to gain entry to celebrities’ iCloud accounts. Regardless, once the account is compromised, whatever data it’s storing is likely compromised too.

All About the Cloud

If all of this seems like a side note to the new features Apple nerds really care about, like a bigger iPhone screen, it’s not. As Andreessen Horowitz’s Ben Evans noted after the Worldwide Developers Conference in July, “iOS 8 is really iOS 2.0,” and that next generation of Apple’s flagship mobile operating system is all about the cloud.

“For Apple, a lot of iOS 8 is about removing reasons to use the web at all, pulling more and more of the cloud into apps,” Evans writes. As Google stitches the web ever more tightly into Android’s fabric, Apple is embracing a closed system powered by its private cloud. And it’s that cloud that’s just become harder to trust.

In response to the hacks, Apple says it’s investigating. But as we at WIRED know all too well, this is hardly the first time an iCloud account has been hacked with disastrous consequences. If Apple were a startup like Dropbox, such a high-profile theft of users’ sensitive, private data could spell the difference between success and failure as a company. If iCloud really is this poorly locked down, Apple starts to look like a company that has so much money it feels it can afford to be complacent.

Someone Finally Made the Glorious Loki Movie You’ve Been Waiting For

There are few villains in the Marvel Cinematic Universe more beloved than Loki. Actually, probably none. (Just ask, like, anyone on Tumblr.) Devotion to the Asgardian mischief-maker—or to Tom Hiddleston, who plays him—has reached such proportions that many fans would probably fight Frost Giants to get Marvel to make a stand-alone Loki flick. Even then, Marvel may not grant that wish. In the interim, fans have Loki: Brother of Thor.

The movie (above) combines footage from Thor, The Avengers, and Thor: The Dark World (and even some deleted scenes and a snippet of Guardians of the Galaxy) to create a Loki’s-eye view of the story. “My goal was to chronicle the character development of Loki into a single narrative,” Vimeo user “Loki Odinson” (an alias burdened with glorious purpose) writes in the clip’s description. “You’ll notice I took out a lot of fluff and even some beloved fight scenes, only because I was trying to focus the film on Loki and his relationships.”

That sound you hear is half of the internet calling in sick today. Pop some popcorn, Hiddlestoners—this one is for you.

Google Reboots Its Business Software Operation as ‘Google for Work’

Amit Singh spent 20 years at Oracle. But he left for the clouds.

Amit Singh, president of the newly re-christened Google for Work. courtesy Amit Singh/Twitter

Google is best known for the online services and software it offers to everyday consumers—things like Google Search, Gmail, and the Android mobile operating system that runs so many smartphones and tablets—but for more than a decade, the tech giant has also offered services, software, and even hardware to the world’s businesses, including everything from online applications such as Google Docs to sweeping cloud computing services such as Google Compute Engine. Today, the company unveiled a new identity for this growing part of its operation. It will now be known as Google for Work.

This group—which operates across Google’s larger organization, essentially turning existing consumer products into business tools—was previously known as Google Enterprise. Meant to provide a shot in the arm for Google’s efforts in the workplace, the new name reflects a larger change across the world of business software and hardware, where so many tools are finding their way into businesses through individual employees rather than dedicated IT workers. It’s known as the “consumerization of IT.”

“In many ways, work itself has changed in the last five years,” Amit Singh, the president of the re-christened group, said this morning during a briefing with reporters at Google’s San Francisco offices. “Mobile has come into play, and users are making choices, not just enterprise IT.”

The name may take a while to stick, even inside Google. Rajen Sheth, the “father of Google Apps,” who has worked with the group for a good ten years, mistakenly used the group’s old name during the press briefing in San Francisco. But the ultimate aim is to make it easier for the average person to understand Google’s efforts in this area.

All of the group’s products will be tagged with the “for Work” moniker. Earlier this year, the company introduced Google Drive for Work, a version of its online file storage service that’s intended for businesses, and now, all other Google business tools will follow suit. Google Apps, for instance, will become Google Apps for Work.

According to Singh, 60 percent of the Fortune 500 are now paying for what are now called Google for Work services, and more than 1,800 customers are signing up each week for its latest product, Google Drive for Work. But the company believes its business tools provide a much larger opportunity for growth, and that’s one of the reasons it’s rolling out this new moniker. “We are in a very important phase of growth,” Singh said of Google as a whole.

Sheth says that this move isn’t just a change in brand. “What we’re looking at here is more than just a name change,” he said. “It’s a mindset shift.” In short, the company realizes this is quickly becoming a “user-first market,” and it wants to make an even greater effort to appeal to those end users. The word “enterprise,” you see, doesn’t mean that much to the average user. But “work” means a great deal.

The Bridge Is Over: Sonos Adds Simpler Wi-Fi Setup to All Its Speakers

You no longer need a Bridge to stream music to your Sonos speakers. Sonos

Can’t decide what song to listen to on your Sonos speakers today? You should start with “The Bridge Is Over” by Boogie Down Productions. That’s because you won’t need the $50 Sonos Bridge to stream music to the company’s speakers anymore—but you may still want to use one in many cases.

Sonos just announced a firmware update that eliminates the need for the Bridge, which had to be physically connected to a router with an Ethernet cable for any Sonos system to work. Now, you can connect to one or more Sonos speakers directly via Wi-Fi, with no hardwired connection. During configuration, a speaker will form an ad-hoc connection with your mobile device. You can set up one of the speakers to act as a wireless bridge for multi-speaker setups, although there are some limitations as compared to a Bridge setup.

The free over-the-air update will go out today, and the new feature is backwards-compatible. All new Sonos speakers will come with the new firmware, and the update is also being pushed out to all older Sonos systems. You’ll be able to choose between a “Standard Setup”—the new way that just uses Wi-Fi—and a “Bridge Setup” that uses the traditional wired-in hub.

According to Nick Millington, vice president of product development at Sonos, the Wi-Fi setup won’t impact performance. Millington says that network reliability and synchronization between speakers won’t be issues, and you’ll get “95 percent-plus” of the performance of a Bridge-equipped system. However, there are still scenarios in which a Bridge will be the best route.

If you’ve already got a Sonos setup with a Bridge in place, you will likely want to keep it that way. Although the Wi-Fi connectivity is a simpler way to configure a single-room or single-speaker system, Sonos says that the Bridge is still the best way to drive more-elaborate and farther-reaching setups.

For the Wi-Fi-only setup, all speakers will need to be in range of your Wi-Fi router, which means you are limited in terms of speaker placement. And although the Wi-Fi feature will work with the company’s Playbar soundbar by itself, a hardwired Bridge is still required for 5.1- and 3.1-channel Sonos home-theater setups.

We haven’t had any hands-on time with the new “Standard Setup” feature, but it’s a welcome option, especially for users of the compact and affordable Play:1. You won’t need a separate piece of hardware to stream music to it, and one less wire and one less gadget are generally good things.

No One Tweets Like the Japanese, and That Was a Huge Problem for Twitter


Twitter engineers Ali Alzabarah (left) and Mazdak Hashemi pose among the mysterious colored deer that decorate the main hall at the company’s offices in San Francisco. Alex Washburn / WIRED

Twitter engineer Mazdak Hashemi says the Japanese tweet like no one else on earth.

When the New Year arrives or even as they watch certain moments in shows and movies broadcast on national television, tens of thousands of Japanese will tweet at practically the same instant. “Everyone tweets at the New Year, but the Japanese are more in-sync,” says Hashemi, who, as Twitter’s director of site reliability engineering, works to make sure its mini-messaging service stays in good working order. “They do it at exactly midnight.”

This provides a small window into the unique culture of the Japanese, known for exhibiting a certain type of conformity, but there was a time when it was also an enormous problem for Twitter. As the year 2012 arrived in Japan, the country’s synchronized tweets crashed the entire site, worldwide. It was 3pm in Britain when the site went belly-up.

So, as the next New Year approached, Raffi Krikorian, one of Twitter’s lead engineers, urged Hashemi to find a better way of ensuring the site could handle the next wave of synchronized Japanese tweets. “I think he had some post-traumatic stress,” Hashemi says of Krikorian in the wake of the 2012 New Year. As a result, Hashemi and his team built a new system—known as a software “framework,” in engineering speak—that would let them mimic events like a Japanese New Year tweet storm and actually run these synthetic creations on the thousands of computers that run the live site.

Internet engineers call it “stress testing,” and though this sort of thing is very common, Twitter’s situation was a bit different, and its methods could serve as a model for other online operations as they reach Twitter-like sizes. Because of the real-time nature of the site—where people expect to send and receive instantly, at all times—Hashemi and his team needed tools that could very carefully shape and reshape these massive tests, and because the service is used in this real-time way across the globe—it spans 240 million users who generate about 5,700 tweets a second—there weren’t “off hours” when they could run live tests without having to worry about massive amounts of “real” traffic.

“We can’t test outside of business hours,” says Ali Alzabarah, who works alongside Hashemi. “We don’t have business hours.”

The tests Hashemi wanted to run were so large—larger than the real traffic storm that brought down the site during the last New Year—some engineers at Twitter didn’t even want him to try them. “They thought I was smoking something,” says Hashemi, who describes the company’s wider testing efforts in a blog post published today. “You’re pretty much putting your job on the line. It’s like: ‘Am I going to be here or not?’”

But the stress testing framework he and his team built also included new monitoring tools that would let them closely track the results of the tests—on a second-by-second basis—and scale them back as need be. In the end, these tests proved very successful—and the site stayed up for the next New Year, and the one after that. Last August, it also held firm when the Japanese helped set a new tweets-per-second record as they all tweeted at the arrival of a particular moment in the television airing of an animated movie called Castle in the Sky.
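Twitter has not released this framework, so the details are unknown, but the feedback loop the article describes (ramp up synthetic load, watch a monitor, and scale back the moment errors appear) can be sketched in a few lines. Everything below is illustrative: the function names, thresholds, and the stand-in error model are assumptions, not Twitter's code.

```python
# Illustrative self-throttling load ramp (not Twitter's unreleased
# framework): offered requests-per-second step up each tick and are
# scaled back whenever a monitored error rate crosses a threshold.
def run_stress_test(capacity_rps, ticks, start_rps=1000, step_rps=1000):
    rate, history = start_rps, []
    for _ in range(ticks):
        # Stand-in monitor: errors appear once offered load exceeds capacity.
        error_rate = max(0.0, (rate - capacity_rps) / rate)
        if error_rate > 0.01:
            rate = int(rate * 0.8)   # back off before real users suffer
        else:
            rate += step_rps         # keep ramping toward the target
        history.append((rate, error_rate))
    return history

history = run_stress_test(capacity_rps=5700, ticks=20)
print(max(rate for rate, _ in history))  # hovers near capacity instead of running away
```

The design point is the second-by-second feedback: without the monitor-and-back-off step, a synthetic storm larger than the real one would simply take the live site down.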

Much of this is thanks to a sweeping effort to rebuild the site using a software programming technology called Scala. And the company may be expanding into data centers in other parts of the world, so that it can serve foreign countries like Japan with dedicated local machines—though Hashemi declines to comment on this. But the company’s new stress testing framework plays its own important role. According to Adrian Cockcroft, a technology fellow with venture capital firm Battery Ventures who previously served as a chief architect at Netflix, another company that deals with rather unusual types and amounts of online traffic, this sort of thing isn’t easy.


“As soon as you get to enormous scale, the off-the-shelf testing products fail,” he says. “You have to synthesize so much traffic with a pattern that actually matters. You have to put a lot of thought into what the traffic pattern is, and it’s quite hard, then, to actually build it. There are certain subtleties to this.”
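Cockcroft’s point, that the hard part is synthesizing traffic with a pattern that actually matters, can be illustrated with a toy schedule generator. Everything here is illustrative except the 5,700-tweets-per-second baseline, which comes from the article; the spike shape and its parameters are assumptions.

```python
import math

def traffic_pattern(seconds, baseline=5700, spike_at=30, spike_height=5, spike_width=3):
    """Build a per-second request schedule: a steady baseline plus a sharp
    synchronized surge, like everyone tweeting at midnight on New Year.
    Returns a list of target request rates, one per second."""
    schedule = []
    for t in range(seconds):
        # Gaussian bump centered on the moment of the surge
        surge = spike_height * math.exp(-((t - spike_at) ** 2) / (2 * spike_width ** 2))
        schedule.append(int(baseline * (1 + surge)))
    return schedule

schedule = traffic_pattern(60)
print(schedule[0], max(schedule))
```

A uniform firehose at the average rate would miss exactly the scenario that breaks sites like Twitter: the brief, synchronized peak many times above the baseline.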

As other services across the net continue to grow, they too will face similar testing problems, and the good news is that companies like Netflix and Twitter are showing the way. Netflix has open sourced many of the tools it built to test its site, and Twitter works in similar ways, sharing many of its software creations with the world at large in an effort to boost the larger community of sites and services.

Twitter has already open sourced a tool called Iago, which generates the “fake traffic” for its stress tests. And though it has not released the stress testing framework for carefully building and monitoring these tests (the thing doesn’t even have a name), the company could do so in the future. That could come in handy. After all, the Japanese aren’t going anywhere. Nor is the rest of the net.

Why Your Library May Soon Have Laser Cutters and 3-D Printers


Visit the downtown branch of the Chattanooga Public Library and you'll find the usual stuff: rows of books, magazines, and computers. But walk up to the fourth floor and there's something unexpected. It's a “makerspace”—complete with a laser cutter, a zine lab for making paper publications, and a 3-D printer. There's even a loom.

When it opened in spring 2013, the maker floor—formerly unused and filled with decrepit equipment—became a massive hit, and up to 1,200 patrons attended events there. “Normally you hold a library event and you get six people,” says Meg Backus, the systems administrator and chief maker for Chattanooga. But this new floor gives patrons access to new forms of literacy, ones they hunger after: design, programming, video editing, book writing, and website building. Consider it a glimpse into the future of libraries. They're becoming places to not just imbibe knowledge but create it—physically. Many people don't have access to classic hacker spaces, are intimidated by them, or can't afford them. “But here all you need is a library card,” says CJ Lynce, who runs a similarly equipped space at the Cleveland Public Library.

Chattanooga and Cleveland aren't the only cities giving this new kind of library a try. A survey by John Burke at Miami University found that 109 libraries in the US had a makerspace or were close to opening one. Others are hosting events like Wikipedia edit-a-thons, where residents plumb the library's resources to create articles about local history. (One library even has its own farm.) This ferment is attracting patrons; a Pew Internet survey found that these new modes bring in folks who normally shun libraries, typically men and people with limited education.

Ezra Reynolds is an example. As a kid he visited Chattanooga's main branch regularly but eventually stopped. Today he works assisting people with physical disabilities, and a year ago he adopted a son (now 2) whose arms end below the elbow. When Reynolds heard about the 3-D printer, he made his son a bunch of customized prostheses, including utensil- and pencil-holders. “This is what got me back in the door to the library after probably a 15-year hiatus,” Reynolds says. When he visits the library now, he often shares his new skills. This is another part of the trend: spaces where people interact. Older folks teach sewing to the younger ones, who in turn teach them laser etching.

But what about books? Public Library Association research shows that people have checked out slightly fewer materials in recent years. And Pew found that about a third of patrons are opposed to makerspaces if they displace books. But while I'm just as sentimental about the primacy of hard copy, the librarians aren't. As they all tell me, their job is helping with access to knowledge—not all of which comes in codex form and much of which is deeply social. Libraries aren't just warehouses for documents; they're places to exchange information. “Getting people in a room, talking and teaching each other, is huge,” Backus says. Nor are the makerspaces necessarily expensive. The Chattanooga project cost only $25,000.

You have to give the librarians credit. Stereotype says they're fusty, but the reality is absolutely the opposite. Over and over they've adapted to new information tools, from microfiche to CD-ROMs to the Internet. Now this—possibly the best example I've seen of how a storied institution embraces change.

Angry Nerd: Are You Ready for Grant Morrison’s Mind-Bending Multiversity?

The Multiversity spans 52 parallel universes and features characters like a Nazi Superman and a vampire Batman. Keeping track of everything requires an infographic. It’s a lot. That said, Grant Morrison’s new, mind-bending mini-series still encompasses everything that’s right in the alternate reality comics world. Angry Nerd is ready to dive in.