Don’t Get Bullied by IT



Feeling bullied by IT? Image: bnilsen/Flickr



In a modern world of self-service technology, enterprises have been introduced to a new phenomenon: shadow IT. Shadow IT, also known as rogue IT or stealth IT, refers to IT systems and hardware or software solutions that are not approved or supported by an organization’s IT department. This includes, but is not limited to, personal smartphones, cloud services, portable USB drives, external email services such as Gmail, instant messaging, and other third-party applications.


Shadow IT tends to carry a negative connotation because of the struggles it presents for enterprise IT departments, and it is usually avoided or prohibited. The reality, however, is that with the rapid consumerization of IT, cloud adoption, and BYOD, enterprise IT departments need to confront the challenges that come with shadow IT head-on rather than ignore them.


One of the biggest issues with shadow IT is enterprise security, especially when the applications in use are not subject to the same security measures applied to pre-approved technology. Depending on the industry, shadow IT can also raise compliance concerns, particularly when employees use free third-party cloud storage services, such as Dropbox, to store corporate data, leaving that data more exposed to malware and hackers.


Shadow IT can also create data silos when applications are not properly integrated into an organization’s existing network. IT departments are finding it difficult to maintain control of enterprise data due to the rapid adoption of public cloud storage. Because the public cloud rarely has an established relationship or connection with the company’s network and service provider, it is often difficult for IT teams to effectively monitor activity in these applications. And if these applications are not deployed properly on the company’s network, they can also negatively impact bandwidth and application protocols.


With all the challenges of shadow IT, awareness is key, and there is potential for an even bigger problem if CIOs and IT teams stick their heads in the sand or push back on the adoption of these technologies. If these issues are not addressed, IT departments lose control of their systems and network. Ultimately, they open up the possibility that the company’s assets, regulatory compliance, and brand reputation will be irrevocably compromised.


Instead of ignoring the problem, IT experts should take the necessary steps to take control of shadow IT. The first step is for IT teams to properly educate themselves. Taking the time to understand the problems at hand will help them to audit their networks and solutions more actively and effectively. It will also help them to educate their employees about the inherent risks that come with these applications. In addition to education, IT must find a way to monitor and control the activity that takes place within these applications.


By overseeing application deployment and acquisition, IT is in the position to advise business units as to their various options and institute best practices. This knowledge exchange can only be built from IT’s engagement with the organization. Understanding those challenges and how they can be resolved is critical if IT is to leverage SaaS and users are to gain the kind of experience they expect from enterprise-grade tools.


SaaS and cloud services aren’t going away, and IT departments would be wise to develop a cohesive IT strategy and a baseline set of rules for the use of third-party applications. They may be pleasantly surprised to find that these solutions increase employee efficiency and productivity.


Damon Ennis is SVP of Products at Silver Peak.



From Big Data to Actionable Data: Has Our Biology Failed Us, or Have We Failed to Use It?



Image courtesy of Thinkstock.



The 21st century has seen technology deliver life-altering realities, like getting from New York to Beijing in less than 14 hours or grandparents using FaceTime to see the newly anointed “princes” and “princesses” of future generations. New technologies have not only democratized access to information worldwide, they have also created new ways of providing cheaper and better products that improve quality of life. Yet for all these advances, healthcare continues to operate on an outdated, archaic paradigm.


To be fair, a number of new discoveries have been made through technologies such as next-gen sequencing, high-throughput screening processes, and robotics — however, they continue to be applied to the same hypothesis-driven, chemical-screening platform of 30 years ago. Why are we screening chemicals first and looking to biology last? Most drugs developed today are on the market for years before we can even fully grasp their impact or effect.


Simply put, we fail to engage the biology upfront in the process. Considering that on average it takes roughly $1.7 billion and 12-14 years to bring a single drug to market, this is a costly proposition. And our healthcare system is buckling under this burden.


The very nature of drug development today is based on chemical screening, but the body is primarily driven by biology. We have the means to enable cutting-edge technologies to create an elaborate and sophisticated cellular intelligence community.


This “back to biology” approach can identify key biomarkers driving a disease microenvironment — and then create nonchemical treatments to normalize it. By listening to our own biology, our cells can take part in guiding the drug discovery process. The result? A system where the data generates the hypotheses, rather than the hypothesis blindly ‘seeing what sticks.’


Beyond looking to the data, we also need to go much deeper than our current system provides. Genomics has been heralded as the future for discovery, but on its own does not present the full picture of health and disease. We don’t put our roofs on a basement, so why would we truncate human biology to only one element in our cell’s metabolism? We need to enrich the data to include proteins, lipids, metabolites, RNA, and functional data to the point where we are represented by a deep understanding of our molecular profiles rather than a single gene mutation.


Technology is making it more possible than ever before to appreciate the whole story of the patient by capturing and analyzing every aspect of their cellular makeup. Removing bias through the use of mathematical algorithms and supercomputers will allow us to process this information — making it actionable data rather than just “big data.”


As an industry, we have to look inward and ask ourselves: can we do better? By going back to the biology and letting the data guide the discovery, progress is viable. Eliminating the need for chemical screening brings hope of cutting both the time and the cost of drug development in half. Additionally, using biology rather than chemicals as the basis for drug compounds means more holistic treatments with fewer side effects and comorbidities.


Eighteen percent of US GDP is spent on healthcare, and the current state of affairs has much room for improvement. Technology is creating a common-sense approach to understanding human populations more effectively. Biology, coupled with computational power, will create a powerful roadmap for our future.


Next generation medicine demands a new approach. The good news is that it is possible, the future is bright, and opportunities for tackling some of our most crippling diseases are directly in front of us. Steve Jobs once predicted that the intersection of biology and technology would be the greatest discovery of our generation. The tools are all there. Now we just have to harness them to create the medicines of the future.


Niven R. Narain is the co-founder, President and Chief Technology Officer of the biopharmaceutical company Berg.



Gadget Lab Podcast: Give Me Twittering or Give Me Death



Ariel Zambelich/WIRED



It’s been a crazy week in the data streams. Twitter has once again proven to be an empowering tool that lets bystanders become a multifaceted reporting force, as in the case of the civil unrest in Ferguson, Missouri. Mat and Mike use the recent events as a springboard to reflect on how social media affects our perception of the real world in both good and bad ways. How do Twitter and Facebook know what’s important to you? What’s better: curated streams or real-time firehoses? Should Twitter hide disturbing content or censor speech? And is Facebook in danger of losing its value as it becomes a cozier place for consumer brands? Mat has some answers to that last question, which he gathered during his recent “Like everything” Facebook experiment.


Listen to this week’s episode or subscribe in iTunes.


Send the hosts feedback on their personal Twitter feeds (Mat Honan is @mat, Michael Calore is @snackfight) or to the main hotline at @GadgetLab.



Game|Life Podcast: Legal Pinball, Free Infinity, 10 Million PlayStation 4s



The PlayStation 4 in repose. Ariel Zambelich/WIRED



Wrapup and analysis of this week’s news is the order of the day on the latest Game|Life podcast.


Disney is giving Disney Infinity away for free on Wii U, and all you have to do is buy the base and a figure to play it. Sony’s sold 10 million PlayStation 4s, and nobody’s sure why. Pinball is finally legal in Oakland, California. Peter Rubin, Bo Moore and I tackle each of these in turn. Plus more!


Game|Life’s podcast is posted on Fridays, is available on iTunes, and can be streamed via the player below.


Game|Life Audio Podcast


[dewplayer: http://ift.tt/1l1wfMZ]




Why and How We Must Protect the Right to Film Cops in Ferguson



Getty Images photographer Scott Olson is arrested while covering demonstrators Monday, Aug. 18, 2014, in Ferguson, Mo. AP/St. Louis Post-Dispatch, J.B. Forbes



The standoff in the St. Louis suburb of Ferguson has had Americans, and many around the world, glued to their various screens. Two weeks after the shooting of teenager Mike Brown by a police officer, the Missouri National Guard is retreating from the city and the protests seem to be settling down. But the unrest in Ferguson stands as another painful moment in the history of the oppression of African Americans in the U.S., in which “Ferguson, 2014” will recall “Birmingham, 1963” and “Los Angeles, 1992.”




In each case, local strife went global as U.S. communities protesting racial injustice confronted the kinds of military units that are often deployed abroad.


However, the people on the ground in Ferguson differ from the protesters in Birmingham or L.A. in one crucial way: Their ordeal is playing out on Twitter, Vine, Instagram, Facebook, and YouTube, while the mainstream media largely ignored the beginning of the conflict. Nearly every protester, journalist, and bystander at the scene is carrying a small recording device in her pocket, which is also able to signal a location, share images, and ping those nearby and around the world. In Ferguson and beyond, social media is central to citizens telling the story of what’s happening in their communities and holding governmental authorities to account.


It’s also a privacy nightmare. Recent protests around the world—like those in Iran, Egypt, London, Thailand, and Turkey—all stem from unique circumstances, yet in the digital age they share a similar problem: Protesters are too often restricted from using digital tools to exercise their rights to hold government accountable and participate in society.



Josh Levy


Josh Levy is Advocacy Director at Access, the international digital rights organization. He’s worked for more than a decade at the intersections of technology, politics, and activism.




In the last week, accredited journalists and others who sought to document the militarization of the local police were arrested and threatened, and had their equipment stripped apart, even though it is completely legal to record the police. Other equipment, such as low-flying aircraft, was banned outright. Many in the digital rights community called for authorities to respect the hard-fought “right to record”—the constitutional right to hold government officials accountable via those small recording devices in our pockets. The ongoing arrests and harassment of people in Ferguson for the “crime” of documenting the police demonstrate the real need to keep fighting for this right.


Add to all of this the long, sordid history of the U.S. government’s surveillance of civil rights leaders, dissidents, and pretty much everyone else—which has increased exponentially in the digital age—and it becomes clear that our internet-connected devices cut both ways: They help us get the word out, and yet they’re our weakest security links.


While we need laws and policies to better protect users and the values we hold most dear, including freedom of the press, it is difficult to think of legal reform when people are being beaten in the streets. There are steps that those on the ground—professional and amateur journalists, unarmed citizens, and peaceful protesters—can take right now to protect themselves.


We at Access and many of our friends have produced the Digital First Aid Kit, a one-stop-shop that offers a set of self-diagnostic tools for human rights defenders, bloggers, activists, and journalists facing online attacks. It provides guidelines for digital first responders to assist a person under threat, as well as contact information if you need digital security assistance.


When it comes to protecting the content on your phone, the Electronic Frontier Foundation just updated its Cell Phone Guide for U.S. Protesters, which offers advice on password-protecting your phone, using encrypted communication channels, taking pictures and video, and handling your phone if you get arrested. Protesters can consult both resources to protect their safety and security.


Though the nightly strife of tear gas in Ferguson may be coming to an end, the road to justice in Mike Brown’s shooting is just beginning. The rights of people on the ground to freely document what takes place there—and everywhere—must be upheld at all costs.



Sin City: A Dame to Kill For Is Beautiful, Gritty, and Near-Unwatchable



Courtesy The Weinstein Company



A lot can change in nine years. In the near decade that’s passed since director Robert Rodriguez and Frank Miller brought Miller’s noir comic Sin City to the big screen, filmmaking technology has gotten better, comic book movies have become an official Big Deal, and 3-D has become a lot more palatable. What’s more surprising is that my ability to watch women being hit (or, to use the local parlance, watching dames get roughed up) has withered from Minimal to None. There were only a few moments when I enjoyed Sin City: A Dame to Kill For—but even then, I felt pretty bad about it.


Look, I get it. Sin City is about very bad people living in a very bad town. It’s escapism to a world in more disarray than our own; there are no heroes, and this isn’t Captain America. So yes, if you like gritty noir, problem drinking, car chases, awesome action, heavy bloodshed, and movies that look like comic books come to beautiful life, and you can handle some stereotypes and watching women get treated poorly—this movie is for you. If not, forget it. (Also, if you’ve been joining the chorus for more and better roles for women in comic-book/action movies, this might have you questioning what you wish for—but more on that later.)


A Dame to Kill For, in theaters today, opens on the same monochromatic Basin City you remember from 2005. Though this time around, probably thanks to even better computer-aided greenscreen tech, the images pop a little more strongly, and the simple color palette makes the 3-D legitimately great. Marv (Mickey Rourke, still giving it his all under all those face prosthetics) is going after some frat-boy types who had harassed a man on the street. Naturally, they get dealt with. But as in the original Sin City, this is just a vignette to get you in the mood, a standalone story to remind you that Marv is still a loose cannon doling out justice (he got the electric chair in the last movie, but Dame’s jump-around timeline means he’s alive and well here).


Marv is also the through-line to Rodriguez and Miller’s other stories. He’s at Kadie’s the night that card sharp Johnny (Joseph Gordon-Levitt, pulling out his best wise guy) beats Senator Roark (a bone-chilling Powers Boothe) at poker, only to lose in every other way possible. He’s also around as Nancy (Jessica Alba, even more fallen-angel than she was in 2005) plots revenge for the suicide death of Hartigan (Bruce Willis, pulling his best ghost acting out of the closet), who saved her from rape and death at the hands of Roark’s son Yellow Bastard. And, most centrally, Marv is there to help out his old friend Dwight (Josh Brolin, ably taking over for Clive Owen), who gets sucked back into the world of Ava Lord (Eva Green, the titular “dame to kill for”). Over the course of the film’s 102 minutes, all of these loosely intertwining plots resolve themselves in bloody ways.



Eva Green as Ava Lord.

Courtesy The Weinstein Company



As with 2005’s Sin, the look is visually compelling and gorgeous. No one before or since Rodriguez has managed to make a film look more like a comic book come to life (as opposed to just a live-action adaptation) while also keeping the excitement of flipping its pages. He pulls off a similar feat here, but—like Johnny doing a one-handed card shuffle with his left hand while cracking “I’m ambidextrous”—it ends up feeling like the same old trick. Fans of the first film will likely enjoy A Dame to Kill For for more scenes of Gail Running Old Town or Miho Slicing People or Nancy Dancing, but may be disappointed when those things become rote.


The thing that never seems perfunctory is Green, who would win the Oscar for Gracefully Acting Her Way Through a Lot of Bullshit, if there were such a thing. Ava Lord, a former lover of Dwight’s who pulls him (and Marv) into her web with tales of mistreatment by her husband and his man Manute (Dennis Haysbert, taking over for Michael Clarke Duncan), has always been a twisted character, and here she does much of the same manipulating she did in Miller’s book: lying to Dwight and seducing cops to get what she wants. Green goes all in on the role, and as one reviewer has already noted, her performance here sees her “claim membership in the pantheon of film noir leading ladies alongside Jane Greer, Gloria Grahame, Marie Windsor, Peggy Cummings, Lizabeth Scott…” She’s mesmerizing to watch. It’s just unfortunate that she has to be so compelling while making false rape accusations, while getting knocked across a room, and while kissing the man who did it. (Dwight has a skill for that, apparently—he did the same thing with Gail in the first Sin City.) It’s disheartening and she deserves better.


Which brings us back to the women of Basin City. Yes, there are amazing femmes fatales—Rosario Dawson as Gail, Jamie Chung (in for Devon Aoki) as the samurai assassin Miho, and Alba’s Nancy all give as good as they get. But there are also women handcuffed to beds and threatened with death, women called “skank,” women tortured by men who are trying to honor their son’s memories (if you know the first Sin City you know what this means). These things are all true to Miller’s original work, and readers of the comics know they are coming, but in a movie with so many great actresses, it’s just unfortunate so few of them actually get much respect on screen.


Any of them could take a true heroine character and make her amazing. And in a climate where the conversation has turned to “When will we get a female-led comic book flick?” it’s disappointing to see a comic-book movie with a lot of strong female performers…who all too often get treated like shit. Sure, Nancy may go all Katniss-with-a-crossbow at one point, but A Dame to Kill For far too often confuses women with weapons for women who are empowered. And no matter how strong Nancy is, the air gets sucked out of the room when Marv, the closest thing to a Lancelot here, watches her pull the trigger and says, “I hope you don’t mind me sayin’ this—you look hot.” (It’s only right to point out that the men of Sin City aren’t rendered very three-dimensionally, either. Mostly they’re just on a mission to out-man each other in all the usual alpha ways.)


Honestly, it’s probably wrong to expect more—all gratuitous everything is the Rodriguez/Miller brand to the core. Sin City stories aren’t meant to be deep, and aren’t here to be politically correct. That’s fine. A lot of un-PC material can be compelling, but what was compelling when Rodriguez and Miller released their first flick in 2005 gets little more than a re-tread here, and a lot has changed since then. As a cinematic venture, A Dame to Kill For is a perfectly fine Sin City adaptation, but Miller and Rodriguez are releasing this film in a time much different than when its predecessor—let alone its source material—was released. After nine years, it’s a shame they haven’t matured.



Tech Time Warp of the Week: Watch Apple’s Awkwardly Wrong Prediction of the Future From 1987


Apple has a long history of weird, self-serving company videos that elevate its computer-and-gadget operation to nothing short of a global superpower. But this is something else.


In 1987, two years after founder Steve Jobs was run out of the company, Apple produced a video that predicted a phantasmagorically glorious future for the maker of the Macintosh. It may be the oddest, most brilliant, and horribly wrong prediction anyone has ever made. With the 7-minute clip, which you can enjoy above, CEO John Sculley, Apple II chief Del Yocam, Apple exec Mike Spindler, and that other cofounder—Steve “The Woz” Wozniak—envisioned what Apple would be like in the year 1997. And let’s just say they didn’t hit the bullseye.


In Cupertino’s vision of a future 1997, Apple dominates the news, the markets, even stand-up comedy. Wall Street loves the company, and its growth is skyrocketing. The original Macs haven’t changed all that much, and Apple computers are everywhere—in living rooms and kitchens, at the airport, on planes, in space, and, well, on your face.


Yes, history would play out somewhat differently than the Applemaniacs hoped it would. By the real 1997, Apple was in the gutter. Sculley had been kicked out of the company four years before, only to be replaced by Spindler, who would join him in the graveyard of ousted CEOs three years later.




Sure, Apple’s late-80s video was meant as a bit of a joke. But humor has never been the company’s strongpoint. Exhibit A: the video’s prediction that the Apple of 1997 would sell a version of its ancient Apple II desktop computer known as the V.S.O.P. Apparently, this stands for “Very Smooth Old Processor.” As Yocam puts it: “This being 1997, some people think the Apple II concept is getting old. We don’t agree.”


Other bits don’t miss the mark quite so badly. The video predicts something called VistaMac, which isn’t all that different from Google Glass, the digital eyewear that is now very much a reality. Of course, unlike the VistaMac, Google Glass doesn’t take floppy disks—and it doesn’t look so very late-80s.


The video also presages a few things we now take for granted, including recommendation systems and ubiquitous virtual assistants that help us navigate the world. “A computer that talks is no big deal. A computer that listens? That’s a breakthrough,” says Woz. “Apple computers have always been friendly, but we’ve gone from friendly to understanding.” Sounds a bit like Siri—though we hasten to add that even Siri doesn’t quite work as promised.


To be fair, Apple did have the last laugh. Though the company’s crazy predictions didn’t exactly come true, 1997 actually turned out to be a very important year for the company. In 1997, Steve Jobs came back, as Apple purchased his new company, NeXT. And he was smart enough to realize the V.S.O.P would never fly.



The Credit Card That’ll Replace All Your Plastic Is Finally Here (Kind Of)


Coin. Photo: Josh Valcarcel/WIRED




When Coin released the first video of its über credit card, the response was enormous.


After 40 minutes, even though it was still just a prototype, 1,000 people had evidently forked over $50 for the super-slim electronic device that stores multiple credit card numbers and lets you use any of them with the mere push of a button. That took the company past its $50,000 pre-order goal. Just a few hours later, it had received a massive 20,000 orders for the device, which slides through checkout-counter card readers much like any other piece of plastic. Within two weeks, more than six million people had viewed the launch video that sent Coin viral. Apparently, there’s an awful lot of pent-up frustration over the supposed problem of a wallet stuffed with too many credit cards.


Today, much of that frustration is still pent up. Nine months later, most of those who pre-ordered the device are still waiting for it to show up on their doorsteps. Though the company initially promised a summer 2014 ship date, only about 1,000 customers have received a version of the device that Coin quietly released for beta-testing, mostly in San Francisco. But relief is on the way.



Josh Valcarcel/WIRED



On Friday, Coin announced the launch of a new beta-testing program, open to the first 10,000 customers on the wait list who choose to participate. In exchange for taking part, they get a test version of Coin before the device’s official release, now slated for spring 2015. The bad news is they will still have to pay to get the final first-generation version of Coin when it arrives (though they’ll only have to pay $30 for it, a big discount from the regular, non-pre-order price of $100). The good news: WIRED has checked out the beta version of the Coin card, and it actually seems to work.


“The launch received more attention than we thought it would, which was good for us,” Coin founder and CEO Kanishk Parashar told WIRED during a recent visit to his company’s San Francisco offices. “But it also increased the scope of the work we had to finish.”


A Coin for Every Counter


When all those orders first started rolling in last November, Coin the company consisted of about six people working in a small carpeted room downstairs from Dropbox, and Coin the product looked like a white door-key card with a digital watch face built into one corner. Today, Coin has more than 30 employees working in a spacious basement that once housed a meat-smoking operation, and the current version of its namesake device looks much like the mock version in the launch video.


Parashar says the main challenge to getting more Coins out the door has been scaling up manufacturing while working to ensure the devices are secure and can work everywhere. In the Coin office, tables teem with credit-card terminals of every make and model, each with its own idiosyncrasies that Parashar says the company is trying to tease out before making Coin available to everyone. “They’re almost the same but slightly different,” Parashar says of the many terminals users might find at checkout counters. “We have to be the super-set.”


Coin credit card testers. Photo: Josh Valcarcel/WIRED




The beta release is part of that process, he says, an attempt to identify “corner cases” in which users encounter situations where swiping Coin doesn’t work perfectly. Even then, he says, users won’t be left without a way to pay. The app that syncs cards to the Coin device still contains those cards’ numbers and other necessary information, which can be punched in manually. But Parashar says he wants to avoid putting users in a situation where they need the manual option as a fallback. “We want to make sure the experience is top-notch,” he says.


Promises, Promises


On Coin’s Facebook page, commenters impatient for a firm release date are plentiful. And in general, their angst is not unreasonable. The recent history of crowd-funded hardware is littered with prototypes that never became full-fledged products.


But the version that I saw appeared to function very much as promised. Parashar synced his Coin with the Coin app, to which users upload their cards by swiping them through a reader attached to a smartphone audio jack. The app then uses low-energy Bluetooth to transfer the card data to the Coin itself.


On the front of the device, I used a nearly flat button to toggle through the different cards—multiple Visas, an Amex, a gift card. A small black-and-white screen displayed four-character names—VISA, AMEX, GIFT—to identify the different cards, along with the cards’ last four digits and expiration dates. But the most encouraging moment was the swipe itself. Trying out multiple stored cards on multiple card terminals in the office, the card data showed up. Receipts were printed. Everything needed to use Coin to pay for something seemed to be working as promised.




A working beta version may or may not be enough to quell the impatience of Coin customers still waiting to shrink down their wallets. But with 10,000 Coins on the way, Parashar and company seem to have figured out not only how to get Coin to work, but how to get Coins made. The next interesting test will be to see how Coin gets used once a bunch of them are out on the streets. Will they end up just another geek novelty, a solution to a “problem” that wasn’t really a problem in the first place? Or will they be the digital-analog answer for how everyone will pay in the 21st century, combining the best parts of paying by phone with the familiar feel of swiping plastic?


Right now, Silicon Valley is sure that traditional ways of paying belong to the past, but no one has quite figured out payments’ definitive future. If Parashar can get enough Coins out the door, that future could have one more candidate.



Friday Mushrooms [Aardvarchaeology]



Has it really been almost four years since I blogged about mushrooms? This afternoon my wife and I repeated our September 8, 2010 expedition to the hills between Lakes Lundsjön and Trehörningen and picked almost a kilo of mushrooms in a bit more than an hour. We got:



  • King bolete, Stensopp/Karl Johan, Boletus edulis

  • Bay bolete, Brunsopp, Boletus badius

  • Orange birch bolete, Tegelsopp, Leccinum versipelle

  • Birch bolete, Björksopp, Leccinum scabrum

  • Entire russula, Mandelkremla, Russula integra

  • Two kinds of red or brown brittlegill, mild-tasting and thus non-poisonous. Scandyland has more than 130 species of brittlegill; none are deadly, and luckily there’s a simple taste test for which ones are good to eat.




The Data Scientist on a Quest to Turn Computers Into Doctors



Jeremy Howard. Jon Snyder/WIRED



Some of the world’s most brilliant minds are working as data scientists at places like Google, Facebook, and Twitter—analyzing the enormous troves of online information generated by these tech giants—and for hacker and entrepreneur Jeremy Howard, that’s a bit depressing.


Howard, a data scientist himself, spent a few years as the president of Kaggle, a kind of online community for data scientists that sought to feed the growing thirst for information analysis. He came to realize that while many of Kaggle’s online data analysis competitions helped scientists make new breakthroughs, the potential of these new techniques wasn’t being fully realized. “Data science is a very sexy job at the moment,” he says. “But when I look at what a lot of data scientists are actually doing, the vast majority of work out there is on product recommendations and advertising technology and so forth.”


So, after leaving Kaggle last year, Howard decided he would find a better use for data science. Eventually, he settled on medicine. And he even did a kind of end run around the data scientists, leveraging not so much the power of the human brain but the rapidly evolving talents of artificial brains. His new company is called Enlitic, and it wants to use state-of-the-art machine learning algorithms—what’s known as “deep learning”—to diagnose illness and disease.




Publicly revealed for the first time today, the project is only just getting off the ground—”the big opportunities are going to take years to develop,” Howard says—but it’s yet another step forward for deep learning, a form of artificial intelligence that more closely mimics the way our brains work. Facebook is exploring deep learning as a way of recognizing faces in photos. Google uses it for image tagging and voice recognition. Microsoft uses it for real-time translation in Skype. And the list goes on.


But Howard hopes to use deep learning for something more meaningful. His basic idea is to create a system akin to the Star Trek Tricorder, though perhaps not as portable. Enlitic will gather data about a particular patient—from medical images to lab test results to doctors’ notes—and its deep learning algorithms will analyze this data in an effort to reach a diagnosis and suggest treatments. The point, Howard says, isn’t to replace doctors, but to give them the tools they need to work more effectively. With this in mind, the company will share its algorithms with clinics, hospitals, and other medical outfits, hoping they can help refine its techniques.


Deep-Learning Doctors


Howard says that the health care industry has been slow to pick up on the deep-learning trend because it has been rather expensive to build the computing clusters needed to run deep learning algorithms. But that’s changing.



Jeremy Howard and senior data scientist Choon Hui Teo look at some of the latest research in using deep learning to detect mitotic activity in breast cancer. Enlitic



Howard isn’t the only one exploring these possibilities. He says academic researchers such as Stanford computer scientist Daphne Koller have already made progress in applying deep learning to medicine. And then there’s IBM, whose Jeopardy-winning supercomputing system, Watson, is using machine learning to aid doctors at New York’s Memorial Sloan-Kettering Cancer Center.


But Watson doesn’t use deep learning per se—it uses older techniques—and Howard says the overall approaches taken by the two companies are very different. IBM is essentially feeding Watson medical textbooks in an attempt to teach it what doctors already know, he says, while Enlitic is feeding raw data into its machines, letting the computers find the patterns between certain symptoms and treatments with different outcomes. In other words, Watson mimics medical science in the pursuit of creating an artificial super doctor that knows more than any single doctor could ever learn. But Enlitic could potentially make new discoveries by uncovering previously unnoticed patterns in the data.


The Real Challenge


The real challenge, Howard says, isn’t writing algorithms but getting enough data to train those algorithms. He says Enlitic is working with a number of organizations that specialize in gathering anonymized medical data for this type of research, but he declines to reveal the names of the organizations he’s working with. And while he’s tight-lipped about the company’s technique now, he says that much of the work the company does will eventually be published in research papers.


Even with expert help, trying to create such a system is an intimidating task. After all, the hope is that people will trust their lives to Enlitic. “Certainly, we’re doing something more risky than giving someone a product recommendation they didn’t like,” Howard says. But he’s undaunted. After all, the potential reward is far greater.



OS X Yosemite: How to Use the New, More Powerful Spotlight Search



Apple



Apple’s latest desktop operating system, OS X Yosemite, won’t officially come out until sometime this fall. But now that its public beta is open, both developers and a large number of Mac owners are able to use a preview version of the landmark OS.


For those who’ve just started using the beta, or are just anticipating its launch later this year, we’ve got some tips on how to best take advantage of the redesigned OS and its many new features. In this edition, we take on Apple’s systemwide search, Spotlight.


Spotlight always felt like a secondary, behind-the-scenes tool in previous versions of OS X, but with Yosemite it’s grown into a key feature. Previously, when you summoned Spotlight with a command-spacebar press (or a click of the magnifying glass icon at the upper right of the menu bar), the Spotlight search field popped up in the upper right, politely out of the way of the rest of your desktop experience. Not anymore. Now, a much larger Spotlight search bar pops up front and center, cursor blinking before the words “Spotlight Search.”


There’s reason for the (forgive me) spotlight on Spotlight. It is a much more powerful search tool than it was in Mavericks or previous versions of OS X, largely thanks to the much wider pool it can source answers from. Spotlight now turns up results from your installed apps, Mail, Messages, your calendar, saved files, images, folders, your bookmarks and web history, the dictionary, and even the web (through Bing), Maps, the App Store, and the iTunes Store. Hits are listed in a panel on the left, organized by category, with the most relevant “Top Hit” automatically selected at the top of this list. To the right, a QuickLook panel offers a look at what that Top Hit (or any other option you select from the left panel) holds. It takes a few seconds for this preview to load, but if you’re looking for a particular email, for example, it’s certainly faster than hopping into the Mail app or browser and then doing a search.



Screenshot: WIRED



Here are a few examples of how it works: A search for “kitten” yielded an iTunes Store top hit of a band/album called Kitten; the Wikipedia entry for kitten; a handful of links from my web history that involved kittens; and lastly, a Dictionary definition for the word. If I just search the letter “C,” the top hit is Google Chrome, followed by other applications like Calendar, HipChat, and Contacts, then system locations that start with the letter C like iCloud, CDs and DVDs, and Mission Control, then contacts that start with C, and events and reminders (cat food, cat litter, cucumber) that start with that letter. If I search a specific location like “Mexico Au Parc,” a local burrito joint, the top hit comes from Maps, and it shows a closeup of the business’s location followed by information pooled from Yelp like its rating, photos, hours, and reviews, along with buttons you can press to get directions to or from the location.


If the order of results that pops up doesn’t satisfy you—say, if you don’t want iTunes Store results to be your top hit—you can customize the order in which results display (and where results are sourced from). Just go to System Preferences > Spotlight, and drag and drop categories into the order you want them listed, or uncheck them if you don’t want Spotlight to search those locations.


Spotlight is poised to make search and learning on your Mac faster and more streamlined than ever before. It also feels like a launchpad for even faster voice-based search in the future if Apple ever decides to bring a form of Siri to the desktop. For now though, hitting command and spacebar to get this level of search results is truly time saving, and I’m excited to start using the final, finished version when it debuts this fall.



The Legendary Photographer Who Captured the Softer Side of NYC



Saul Leiter (1923-2013), the subject of Tomas Leach’s new documentary film ‘In No Great Hurry.’ Tomas Leach



Saul Leiter long has been recognized by those in the art world as one of the most important street photographers of his time. But it wasn’t until after his death in November that the New Yorker’s work became widely known and loved by the general public.


Much of the renewed appreciation can be attributed to the film In No Great Hurry: 13 Lessons In Life From Saul Leiter , which premiered in select theaters late last year and is getting broader attention thanks to the recent release on DVD. The documentary, by filmmaker Tomas Leach, captures the eternally modest artist in his later years.


“He’ll be generally recognized as one of the great photographers of the 20th century soon enough,” says Leach, who spent months following Leiter.


Part of the reason Leiter remained in the shadows was the fact his work wasn’t as “edgy” as that of other street shooters of his time. While his contemporaries like Diane Arbus and Garry Winogrand were shooting twitchy black and white photos of New York laid bare, Leiter focused on the softer and more colorful aspects of the Big Apple—its iconic yellow cabs, the glamorous high fashion, the omnipresent flower stalls. It was easy to be seduced by the trending and more critical photographers and to overlook the subtle beauty of Leiter’s imagery.


He was also ahead of his time. He shot color photos decades before William Eggleston kickstarted the re-emergence of color photography in the US.


Leach, who didn’t know much about Leiter when he started the project, decided to make the film after reading a copy of Leiter’s book Early Color (2006). He was instantly mesmerized by Leiter’s color street photographs from the 1940s and 1950s. Leach cold-called the Howard Greenberg Gallery, which represents Leiter, and said Leiter should be the subject of a documentary. The gallery agreed but said there would be no guarantee of the artist’s participation. Leiter was famous for shunning the spotlight and repeatedly said he never wanted to be famous.


“The gallery was interested, but told me that Saul did things his own way,” Leach says.


Leach persisted and sent the gallery his past film work, which it showed to Leiter. The octogenarian eventually came round and invited Leach to New York for coffee. And so began their “slow courtship.”


In No Great Hurry provides a glimpse of Leiter’s daily life in Manhattan. It’s a slow burn, but that was the pace of Leiter’s life, especially in his twilight years. One might infer that slowing down, looking up, and truly seeing is key to capturing a decisive moment. In his later years, Leiter’s New York increasingly narrowed to his own neighborhood.


“A couple of blocks really,” Leach says. “One of his great abilities was in seeing beauty in the things around him.”


Leach, like so many others, admired Leiter’s ability to be himself in his work and in his life. He was stuck in his ways but not cantankerous. Deliberate, yet open. Cautious, but loving. This deep respect set the tone for the film.


“Saul’s greatest lesson was to believe in yourself and your own way of seeing the world,” Leach says.


In No Great Hurry is an antidote to the modern art world, where self-promotion is vital to success. Leiter proved that good work will be recognized in due time and that one need not aggressively self-promote or seek attention. Leach says this humble, understated approach made Leiter all the more charming and a worthy subject for a documentary.


“Every moment with Saul was memorable for me,” says Leach.


Shortly before Leiter passed away on November 26, he and Leach watched the final cut of the movie. The private screening was nerve-wracking but well received.


“Saul spent the whole time laughing at himself, and saying he talked too much,” says Leach. “But in the end, he said it was fine and fair.”


It’s a good job Leiter approved, because now if you Google “Saul Leiter,” the top result is the website for In No Great Hurry, above the Saul Leiter Wikipedia page. Leach is humbled and proud, because his original goal was to address the paucity of information about Leiter.


“If the film is bringing more people to his work and showing them a bit of how special Saul was as a person, I’m delighted,” Leach says.



‘In No Great Hurry’ was released in January 2014 to critical acclaim and is now available to buy as DVD or digital download.




How to Solve Google’s Crazy Open-Ended Interview Questions



Getty



One of the most important tools in critical thinking about numbers is to grant yourself permission to generate wrong answers to mathematical problems you encounter. Deliberately wrong answers!


Engineers and scientists do it all the time, so there’s no reason we shouldn’t all be let in on their little secret: the art of approximating, or the “back of the napkin” calculation. As the British writer Saki wrote, “a little bit of inaccuracy saves a great deal of explanation.”


For over a decade, when Google conducted job interviews, they’d ask their applicants questions that have no answers. Google is a company whose very existence depends on innovation—on inventing things that are new and didn’t exist before, and on refining existing ideas and technologies to allow consumers to do things they couldn’t do before.


Contrast this with how most companies conduct job interviews: In the skills portion of the interview, the company wants to know if you can actually do the things that they need doing.


But Google doesn’t even know what skills they need new employees to have. What they need to know is whether an employee can think his way through a problem.


Of Piano Tuners and Skyscrapers


Consider the following question that has been asked at actual Google job interviews: How much does the Empire State Building weigh?


Now, there is no correct answer to this question in any practical sense because no one knows the answer. Google isn’t interested in the answer, though; they’re interested in the process. They want to see a reasoned, rational way of approaching the problem to give them insight into how an applicant’s mind works, how organized a thinker she is.


Excerpted from The Organized Mind: Thinking Straight in the Age of Information Overload, by Daniel J. Levitin




There are four common responses to the problem. The first two: people throw up their hands and say “that’s impossible,” or they try to look up the answer somewhere.


The third response? Asking for more information. By “weight of the Empire State Building,” do you mean with or without furniture? Do I count the people in it? But questions like this are a distraction. They don’t bring you any closer to solving the problem; they only postpone being able to start it.


The fourth response is the correct one, using approximating, or what some people call guesstimating. These types of problems are also called estimation problems or Fermi problems, after the physicist Enrico Fermi, who was famous for being able to make estimates with little or no actual data, for questions that seemed impossible to answer. Approximating involves making a series of educated guesses systematically by partitioning the problem into manageable chunks, identifying assumptions, and then using your general knowledge of the world to fill in the blanks.


How would you solve the Fermi problem of “How many piano tuners are there in Chicago?”


Where to begin? As with many Fermi problems, it’s often helpful to estimate some intermediate quantity, not the one you’re being asked to estimate, but something that will help you get where you want to go. In this case, it might be easier to start with the number of pianos that you think are in Chicago and then figure out how many tuners it would take to keep them in tune.




In any Fermi problem, we first lay out what it is we need to know, then list some assumptions:


1. How often pianos are tuned

2. How long it takes to tune a piano

3. How many hours a year the average piano tuner works

4. The number of pianos in Chicago


Knowing these will help you arrive at an answer. If you know how often pianos are tuned and how long it takes to tune a piano, you know how many hours are spent tuning one piano. Then you multiply that by the number of pianos in Chicago to find out how many hours are spent every year tuning Chicago’s pianos. Divide this by the number of hours each tuner works, and you have the number of tuners.


Assumption 1: The average piano owner tunes his piano once a year.


Where did this number come from? I made it up! But that’s what you do when you’re approximating. It’s certainly within an order of magnitude: The average piano owner isn’t tuning only one time every ten years, nor ten times a year. One time a year seems like a reasonable guesstimate.


Assumption 2: It takes 2 hours to tune a piano. A guess. Maybe it’s only 1 hour, but 2 is within an order of magnitude, so it’s good enough.


Assumption 3: How many hours a year does the average piano tuner work? Let’s assume 40 hours a week, and that the tuner takes 2 weeks’ vacation every year: 40 hours a week x 50 weeks is a 2,000-hour work year. Piano tuners travel to their jobs—people don’t bring their pianos in—so the piano tuner may spend 10 percent–20 percent of his or her time getting from house to house. Keep this in mind and take it off the estimate at the end.


Assumption 4: To estimate the number of pianos in Chicago, you might guess that 1 out of 100 people have a piano—again, a wild guess, but probably within an order of magnitude. In addition, there are schools and other institutions with pianos, many of them with multiple pianos. This estimate is trickier to base on facts, but assume that when these are factored in, they roughly equal the number of private pianos, for a total of 2 pianos for every 100 people.


Now to estimate the number of people in Chicago. If you don’t know the answer to this, you might know that it is the third-largest city in the United States after New York (8 million) and Los Angeles (4 million). You might guess 2.5 million, meaning that 25,000 people have pianos. We decided to double this number to account for institutional pianos, so the result is 50,000 pianos.


So, here are the various estimates:

1. There are 2.5 million people in Chicago.

2. There are 2 pianos for every 100 people.

3. There are 50,000 pianos in Chicago.

4. Pianos are tuned once a year.

5. It takes 2 hours to tune a piano.

6. Piano tuners work 2,000 hours a year.

7. In one year, a piano tuner can tune 1,000 pianos (2,000 hours per year ÷ 2 hours per piano).

8. It would take 50 tuners to tune 50,000 pianos (50,000 pianos ÷ 1,000 pianos tuned by each piano tuner).

9. Add 15 percent to that number to account for travel time, meaning that there are approximately 58 piano tuners in Chicago.


What is the real answer? The Yellow Pages for Chicago lists 83. This includes some duplicates (businesses with more than one phone number are listed twice), and the category includes piano and organ technicians who are not tuners. Deduct 25 for these anomalies, and an estimate of 58 appears to be very close.
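To make the chain of assumptions concrete, here is a minimal sketch in Python of the piano-tuner estimate. Every number is one of the guesses stated above, not measured data, so the output is only as good as those assumptions.

```python
# Fermi estimate: piano tuners in Chicago, using only the guesses stated above.
chicago_population = 2_500_000   # guess: ~2.5 million people
pianos_per_person = 2 / 100      # guess: 2 pianos per 100 people (private + institutional)
tunings_per_year = 1             # guess: each piano is tuned once a year
hours_per_tuning = 2             # guess: 2 hours per tuning
tuner_hours_per_year = 40 * 50   # guess: 40 hours/week for 50 weeks = 2,000 hours
travel_overhead = 0.15           # guess: ~15% of a tuner's time is spent traveling

pianos = chicago_population * pianos_per_person              # 50,000 pianos
tuning_hours = pianos * tunings_per_year * hours_per_tuning  # 100,000 tuning hours per year
tuners = tuning_hours / tuner_hours_per_year                 # 50 tuners, ignoring travel
tuners_with_travel = tuners * (1 + travel_overhead)          # 57.5, call it 58

print(f"Estimated pianos in Chicago: {pianos:,.0f}")
print(f"Estimated piano tuners: {tuners_with_travel:.1f} (roughly 58)")
```

Swapping in different guesses shows how forgiving the method is: halving or doubling any single assumption still keeps the answer within the same order of magnitude.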


But Wait, What About the Empire State Building?


Back to the Google interview and the Empire State Building question. If you were sitting in that interview chair, your interviewer would ask you to think out loud and walk her through your reasoning. There is an infinity of ways one might solve the problem, but to give you a flavor of how a bright, creative, and systematic thinker might do it, here is one possible “answer.” And remember, the final number is not the point—the thought process, the set of assumptions and deliberations, is the answer.


Let’s see. One way to start would be to estimate its size, and then estimate the weight based on that. I’ll begin with some assumptions. I’m going to calculate the weight of the building empty—with no human occupants, no furnishings, appliances, or fixtures. I’m going to assume that the building has a square base and straight sides with no taper at the top, just to simplify the calculations.


For size I need to know height, length, and width. I don’t know how tall the Empire State Building is, but I know that it is definitely more than 20 stories tall and probably less than 200 stories.


I don’t know how tall one story is, but I know from other office buildings I’ve been in that the ceiling is at least 8 feet inside each floor and that there are typically false ceilings to hide electrical wires, conduits, heating ducts, and so on. I’ll guess that these are probably 2 feet. So I’ll approximate 10–15 feet per story.


I’m going to refine my height estimate to say that the building is probably more than 50 stories high. I’ve been in lots of buildings that are 30–35 stories high. My boundary conditions are that it is between 50 and 100 stories; 50 stories work out to being 500–750 feet tall (10–15 feet per story), and 100 stories work out to be 1,000–1,500 feet tall. So my height estimate is between 500 and 1,500 feet. To make the calculations easier, I’ll take the average, 1,000 feet.


Now for its footprint. I don’t know how large its base is, but it isn’t larger than a city block, and I remember learning once that there are typically 10 city blocks to a mile.




A mile is 5,280 feet, so a city block is 1/10 of that, or 528 feet. I’ll call it 500 to make calculating easier. I’m going to guess that the Empire State Building is about half of a city block, or about 265 feet on each side. If the building is square, it is 265 x 265 feet in its length x width. I can’t do that in my head, but I know how to calculate 250 x 250 (that is, 25 x 25 = 625, and I add two zeros to get 62,500). I’ll round this total to 60,000, an easier number to work with moving forward.


Now we’ve got the size. There are several ways to go from here. All rely on the fact that most of the building is empty—that is, it is hollow. The weight of the building is mostly in the walls and floors and ceilings. I imagine that the building is made of steel (for the walls) and some combination of steel and concrete for the floors.


The volume of the building is its footprint times its height. My footprint estimate above was 60,000 square feet. My height estimate was 1,000 feet. So 60,000 x 1,000 = 60,000,000 cubic feet. I’m not accounting for the fact that it tapers as it goes up.


I could estimate the thickness of the walls and floors and estimate how much a cubic foot of the materials weighs and come up then with an estimate of the weight per story. Alternatively, I could set boundary conditions for the volume of the building. That is, I can say that it weighs more than an equivalent volume of solid air and less than an equivalent volume of solid steel (because it is mostly empty). The former seems like a lot of work. The latter isn’t satisfying because it generates numbers that are likely to be very far apart. Here’s a hybrid option: I’ll assume that on any given floor, 95 percent of the volume is air, and 5 percent is steel.


I’m just pulling this estimate out of the air, really, but it seems reasonable. If the width of a floor is about 265 feet, 5 percent of 265 ≈ 13 feet. That means that the walls on each side, and any interior supporting walls, total 13 feet. As an order of magnitude estimate, that checks out—the total walls can’t be a mere 1.3 feet (one order of magnitude smaller) and they’re not 130 feet (one order of magnitude larger).


I happen to remember from school that a cubic foot of air weighs 0.08 pounds. I’ll round that up to 0.1. Obviously, the building is not all air, but a lot of it is (virtually the entire interior space), and so this sets a minimum boundary for the weight. The volume times the weight of air gives an estimate of 60,000,000 cubic feet x 0.1 pounds = 6,000,000 pounds.


I don’t know what a cubic foot of steel weighs. But I can estimate that, based on some comparisons. It seems to me that 1 cubic foot of steel must certainly weigh more than a cubic foot of wood. I don’t know what a cubic foot of wood weighs either, but when I stack firewood, I know that an armful weighs about as much as a 50-pound bag of dog food. So I’m going to guess that a cubic foot of wood is about 50 pounds and that steel is about 10 times heavier than that. If the entire Empire State Building were steel, it would weigh 60,000,000 cubic feet x 500 pounds = 30,000,000,000 pounds.
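
Putting the two boundary materials side by side as a quick sketch; the densities here are the rough guesses from the text, not engineering values.

```python
# Boundary weights: all air (lower bound) vs. solid steel (upper bound).
volume = 60_000_000              # cubic feet, from the volume estimate
air_lb_per_cuft = 0.1            # ~0.08 lb per cubic foot, rounded up
steel_lb_per_cuft = 500          # guessed: ~10x a guessed 50-lb cubic foot of wood

all_air = volume * air_lb_per_cuft       # 6,000,000 pounds
all_steel = volume * steel_lb_per_cuft   # 30,000,000,000 pounds
print(all_air, all_steel)
```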


This gives me two boundary conditions: 6 million pounds if the building were all air, and 30 billion pounds if it were solid steel. But as I said, I’m going to assume a mix of 5 percent steel and 95 percent air.

5% x 30 billion = 1,500,000,000
+ 95% x 6 million = 5,700,000
_______________________________
1,505,700,000 pounds

or roughly 1.5 billion pounds. Converting to tons, 1 ton = 2,000 pounds, so 1.5 billion pounds/2,000 = 750,000 tons.
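
The hybrid calculation, written out so the 5 percent / 95 percent weighting is explicit; the two boundary weights are the guessed figures from above.

```python
# Hybrid estimate: assume each floor is about 5% steel and 95% air by volume.
all_steel = 30_000_000_000   # pounds, upper bound from above
all_air = 6_000_000          # pounds, lower bound from above

weight_lb = 0.05 * all_steel + 0.95 * all_air   # 1,505,700,000 pounds
weight_tons = weight_lb / 2_000                 # ~752,850, or roughly 750,000 tons
print(weight_lb, weight_tons)
```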


This hypothetical interviewee stated her assumptions at each stage, established boundary conditions, and then concluded with a point estimate at the end, of 750,000 tons. Nicely done!


Now Do It With Cars


Another job interviewee might approach the problem much more parsimoniously. Using the same assumptions about the size of the building, and assumptions about its being empty, a concise protocol might come down to this.


Skyscrapers are constructed from steel. Imagine that the Empire State Building is filled up with cars. Cars also have a lot of air in them, and they’re also made of steel, so they could be a good proxy. I know that a car weighs about 2 tons and is about 15 feet long, 5 feet wide, and 5 feet high. The floors, as estimated above, are about 265 x 265 feet each. If I stacked the cars side by side on the floor, I could get 265/15 ≈ 18 cars in one row, which I’ll round to 20 (one of the beauties of guesstimating).


How many rows will fit? Cars are about 5 feet wide, and the building is 265 feet wide, so 265/5 ≈ 53, which I’ll round to 50. That’s 20 cars x 50 rows = 1,000 cars on each floor. Each floor is 10 feet high and the cars are 5 feet high, so I can fit 2 cars stacked up to the ceiling. 2 x 1,000 = 2,000 cars per floor. And 2,000 cars per floor x 100 floors = 200,000 cars. Multiplying by their weight, 200,000 cars x 4,000 pounds = 800,000,000 pounds, or, in tons, 400,000 tons.
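
The whole car-proxy estimate fits in a few lines; every number in it is one of the round guesses above.

```python
# Car-proxy estimate of the building's weight.
cars_per_row = 20        # 265 ft / 15 ft per car ≈ 18, rounded up
rows_per_floor = 50      # 265 ft / 5 ft per car ≈ 53, rounded down
stacked = 2              # 10-ft ceilings over 5-ft-tall cars
floors = 100

total_cars = cars_per_row * rows_per_floor * stacked * floors   # 200,000 cars
weight_lb = total_cars * 4_000                                  # 800,000,000 pounds
weight_tons = weight_lb / 2_000                                 # 400,000 tons
print(total_cars, weight_lb, weight_tons)
```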


These two methods produced estimates that are relatively close (one is a bit less than twice the other), so they give us an important sanity check. Because this has become a somewhat famous problem (and a frequent Google search), the New York State Department of Transportation has taken to publishing its own estimate of the weight, which comes in at 365,000 tons. Both guesstimates brought us within an order of magnitude of the official estimate, which is just what was required.
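
A final sketch of that sanity check, taking the 365,000-ton figure cited above as the reference point:

```python
# Both guesstimates land within an order of magnitude of the cited figure.
official = 365_000
for estimate in (750_000, 400_000):
    ratio = estimate / official
    assert 0.1 < ratio < 10          # within an order of magnitude either way
    print(f"{estimate:,} tons is {ratio:.1f}x the official figure")
```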


These so-called back-of-the-envelope problems are just one window into assessing creativity. Another test that gets at both creativity and flexible thinking without relying on quantitative skills is the “name as many uses” test.


For example, how many uses can you come up with for a broomstick? A lemon? These are skills that can be nurtured beginning at a young age. Most jobs require some degree of creativity and flexible thinking.


The name-as-many-uses test has been used as an admissions test for commercial airline flight schools because pilots need to be able to react quickly in an emergency and to think of alternative approaches when systems fail. How would you put out a fire in the cabin if the fire extinguisher doesn’t work? How do you control the elevators if the hydraulic system fails?


Exercising this part of your brain involves harnessing the power of free association—the brain’s daydreaming mode—in the service of problem solving, and you want pilots who can do this in a pinch. This type of thinking can be taught and practiced, and can be nurtured in children as young as five years old. It is an increasingly important skill in a technology-driven world with untold unknowns.


There are no right answers, just opportunities to exercise ingenuity, find new connections, and allow whimsy and experimentation to become a normal and habitual part of our thinking, which will lead to better problem solving.


Excerpt from THE ORGANIZED MIND: Thinking Straight in the Age of Information Overload. Copyright © 2014 by Daniel Levitin. Reprinted by arrangement with Dutton, a member of Penguin Group (USA) LLC, A Penguin Random House Company.



Absurd Creature of the Week: The Bird That Builds Nests So Huge They Pull Down Trees


At some point, a tree becomes more nest than tree. That sounds like the kind of proverb that old guy at my local bar would tell me. Peter Chadwick/Getty



My father worked for over 30 years in construction, falling off of ladders and getting slivers of metal in his eye and generally bleeding profusely. He toiled like a maniac so our family could eat, all while furthering one of humanity’s most indispensable inventions: large-scale construction of shelter. From the most modest roof that my dad once nearly tumbled off of, to Dubai’s 2,716-foot Burj Khalifa, the tallest building in the world, nothing builds like a human.



For its size (and lack of opposable thumbs) though, Africa’s incredible social weaver surely comes close. These birds, about the size of the sparrows here in the States, come together in colonies of as many as 500 individuals to build by far the most enormous nests on Earth, at more than 2,000 pounds and 20 feet long by 13 feet wide by 7 feet thick. The structures are so big they can collapse the trees they’re built in, and so well constructed they can last for a century, according to Gavin Leighton, a biologist at the University of Miami. With as many as 100 occupied chambers, these are quite possibly the biggest vertebrate societies centered around a single structure (outside of human beings and their skyscrapers, of course).


The social weaver with some building material. Or is that a tiny cigar? I can’t tell. Gavin Leighton



Calling the semi-arid plains of Namibia and South Africa their home, social weavers make use of several different materials, building the nest by weaving in twig after twig. Then they line the insides of the chambers with luxurious grass and feathers and, occasionally, cotton balls that Leighton accidentally drops in the field (perhaps it’s their keen sense of symbolic justice: he uses the cotton after drawing blood from the birds for genetic sampling).