Facebook’s New Data Center Is Bad News for Cisco





Facebook is now serving the American heartland from a data center in the tiny town of Altoona, Iowa. Christened on Friday morning, this is just one of the many massive computing facilities that deliver the social network to phones, tablets, laptops, and desktop PCs across the globe, but it’s a little different from the rest.


As it announced that the Altoona data center is now serving traffic to some of its 1.35 billion users, the company also revealed how its engineers pieced together the computer network that moves all that digital information through the facility. The rather complicated arrangement shows, in stark fashion, that the largest internet companies are now constructing their computer networks in very different ways—ways that don’t require expensive networking gear from the likes of Cisco and Juniper, the hardware giants that played such a large role when the foundations of the net were laid.


“It makes for a very compelling story,” says Carl Perry, a network architect with the software giant Red Hat who has extensive experience in building massive computer networks inside various cloud computing companies. “It’s about solving the problems that the traditional networking companies just haven’t been able to do.”


From the Old to the New


Traditionally, when companies built computer networks to run their online operations, they built them in tiers. They would create a huge network “core” using enormously expensive and powerful networking gear. Then a smaller tier—able to move less data—would connect to this core. A still smaller tier would connect to that. And so on—until the network reached the computer servers that were actually housing the software people wanted to use.
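
To picture that shape, here is a rough sketch in Python. The tier names, switch counts, and rack numbers are purely illustrative, not figures from any real facility; the point is that traffic between distant racks has to climb through the expensive core and back down.

```python
# Illustrative model of a classic tiered network (hypothetical numbers):
# each lower tier hangs off the one above it, and the core carries everything.
tiers = {
    "core":        {"switches": 2,   "uplink": None},           # huge, costly chassis switches
    "aggregation": {"switches": 8,   "uplink": "core"},          # mid-sized, feeds the core
    "top_of_rack": {"switches": 200, "uplink": "aggregation"},   # one small switch per server rack
}

def path(src_rack: int, dst_rack: int, racks_per_agg: int = 25) -> list:
    """Return the tiers a packet crosses between two racks in this model."""
    if src_rack // racks_per_agg == dst_rack // racks_per_agg:
        # Both racks sit under the same aggregation switch: traffic stays low in the tree.
        return ["top_of_rack", "aggregation", "top_of_rack"]
    # Otherwise every byte must cross the core, which becomes the bottleneck and the big expense.
    return ["top_of_rack", "aggregation", "core", "aggregation", "top_of_rack"]

print(path(3, 7))     # neighbors under one aggregation switch
print(path(3, 190))   # distant racks: up through the core and back down
```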


For the most part, the hardware that ran these many tiers—from the smaller “top-of-rack” switches that drove the racks of computer servers to the massive switches in the backbone—was provided by hardware giants like Cisco and Juniper. But in recent years, this has started to change. Many under-the-radar Asian operations and other networking vendors now provide less expensive top-of-rack switches, and in an effort to further reduce costs and find better ways of designing and managing their networks, internet behemoths such as Google and Facebook are now designing their own top-of-rack switches.


This is well documented. But that’s not all that’s happening. The internet giants are also moving to cheaper gear at the heart of their massive networks. That’s what Facebook has done inside its Altoona data center. In essence, it has abandoned the hierarchical model, moving away from the enormously expensive networking gear that used to drive the core of its networks. Now, it uses simpler gear across the length and breadth of its network, and by creating a new way of routing traffic across this network, it can use this gear to actually improve the efficiency of the data center—improve it by leaps and bounds.


Cut From Fresh Fabric


Facebook calls this a new “data center fabric.” If you’re interested in the details, you can read up on them in a blog post penned by Alexey Andreyev, the lead engineer on the project. But the long and the short of it is that Andreyev and his team have created a network that’s modular.


The network is divided into “pods,” and the company can add more pods whenever it likes. This means it’s much easier to expand the network, says Najam Ahmad, who helps oversee network engineering at Facebook. But it also means that Facebook can more easily and more quickly move data across the network.
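
Facebook hasn’t released its pod design as code, but the modular idea can be sketched along these lines. The pod size and switch count below are hypothetical, chosen only to show how the network grows by adding identical units instead of upgrading one central core.

```python
# Sketch of a pod-based fabric (assumed numbers, not Facebook's published spec).
from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    racks: int = 48            # hypothetical racks per pod
    fabric_switches: int = 4   # hypothetical uplinks from each rack into the pod's fabric layer

@dataclass
class Fabric:
    pods: list = field(default_factory=list)

    def add_pod(self, name: str) -> Pod:
        """Expanding capacity means appending another identical pod, not replacing the core."""
        pod = Pod(name)
        self.pods.append(pod)
        return pod

    def total_racks(self) -> int:
        return sum(pod.racks for pod in self.pods)

fabric = Fabric()
for i in range(4):
    fabric.add_pod(f"pod-{i}")
print(fabric.total_racks())  # 192 racks; the fifth pod would slot in the same way
```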


If you think about how Facebook works at all, you probably think about information traveling from a Facebook data center to your phone. But the Facebook application is now so complex—drawing on information from so many different servers—that there’s actually more information flowing within the Facebook data centers than traveling between the servers and people like you. According to Ahmad, there’s an order-of-magnitude difference between the two. The new data center fabric is designed to help deal with all that extra traffic inside the data center.


Part of the trick is that Facebook uses what are called “layer 3” protocols to drive the entire network, all the way from the middle of the network to the servers. Basically, this means that machines can more easily send data to any other machines on the network. “It gives you a lot more flexibility,” Perry says. But the other part is that in the middle of the network, the company isn’t relying on enormously expensive switches to do the heavy lifting.
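
One common way a layer-3 fabric like this spreads traffic is equal-cost multi-path routing: each switch holds several equally good next hops toward a destination and hashes each flow onto one of them, so no single big box has to carry everything. The sketch below illustrates that general technique; it is not Facebook’s published implementation, and the switch names are made up.

```python
# Illustrative equal-cost multi-path (ECMP) next-hop selection.
import hashlib

def pick_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  next_hops: list) -> str:
    """Hash a flow's addresses and ports so its packets always take the same path,
    while different flows spread roughly evenly across all equal-cost uplinks."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    index = int(hashlib.md5(key).hexdigest(), 16) % len(next_hops)
    return next_hops[index]

uplinks = ["fabric-sw-1", "fabric-sw-2", "fabric-sw-3", "fabric-sw-4"]
print(pick_next_hop("10.0.1.5", "10.0.9.7", 49152, 443, uplinks))
print(pick_next_hop("10.0.1.5", "10.0.9.7", 49153, 443, uplinks))  # a different flow may land on another uplink
```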


On the Horizon


So, Facebook’s new data center is not only more efficient, it’s cheaper to build—at least in relative terms. “Moving from a small number of large switches to a greater number of smaller switches was one way to reduce complexity and allows us to scale,” says Ahmad. “It is also less expensive because of the competitive market. These smaller switches are available from a wider number of vendors.”


But this is about more than Facebook. The other large internet companies are moving in this direction as well. Carl Perry says he designed something similar—though on a smaller scale—inside cloud services such as Dreamhost. The kind of network Facebook has built in Altoona, he says, is what so many others will build in the future. “This is something that a lot of us have seen coming on the horizon.”


