Scientists have found antibiotic resistance genes in the bacterial flora of a South American tribe that never before had been exposed to antibiotic drugs. The findings suggest that bacteria in the human body have had the ability to resist antibiotics since long before such drugs were ever used to treat disease.
The research stems from the 2009 discovery of a tribe of Yanomami Amerindians in a remote mountainous area in southern Venezuela. Largely because the tribe had been isolated from other societies for more than 11,000 years, its members were found to have among the most diverse collections of bacteria recorded in humans. Within that plethora of bacteria, though, the researchers identified genes that confer resistance to antibiotics.
The study, published April 17 in Science Advances, reports that the microbial populations on the skin and in the mouths and intestines of the Yanomami tribespeople were much more diverse than those found in people from the United States and Europe. The multicenter research was conducted by scientists at New York University School of Medicine, Washington University School of Medicine in St. Louis, the Venezuelan Institute of Scientific Research and other institutions.
"This was an ideal opportunity to study how the connections between microbes and humans evolve when free of modern society's influences," said Gautam Dantas, PhD, associate professor of pathology and immunology at Washington University and one of the study's authors. "Such influences include international travel and exposure to antibiotics."
Intriguingly, in Dantas' lab, graduate student Erica Pehrsson searched for and found antibiotic resistance genes in bacteria on the skin and in the mouths and intestines of tribe members long isolated from such outside influences.
"These people had no exposure to modern antibiotics; their only potential intake of antibiotics could be through the accidental ingestion of soil bacteria that make naturally occurring versions of these drugs," Pehrsson said. "Yet we were able to identify several genes in bacteria from their fecal and oral samples that deactivate natural, semi-synthetic and synthetic drugs."
Thousands of years before people began using antibiotics to fight infections, soil bacteria began producing natural antibiotics to kill competitors. In turn, neighboring microbes evolved defenses to protect themselves from the antibiotics their bacterial competitors made, likely by acquiring resistance genes from the producers themselves through a process known as horizontal gene transfer.
In recent years, the abundance of antibiotics in medicine and agriculture has accelerated this process, stimulating the development and spread of genes that help bacteria survive exposure to antibiotics. Consequently, strains of human disease that are much harder to treat have emerged.
"We have already run out of drugs to treat some types of multidrug-resistant infections, many of which can be lethal, raising the bleak prospect of a post-antibiotic era," Dantas said.
Scientists don't really know whether the diversity of specific bacteria improves or harms health, Dantas said, but added that the microbiomes of people in industrialized countries are about 40 percent less diverse than what was found in the tribespeople never exposed to antibiotics.
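Diversity comparisons like the 40 percent figure above are typically made with an ecological metric such as the Shannon index, which rewards both the number of bacterial taxa and how evenly they are distributed. Here is a minimal sketch; the abundance counts are invented for illustration and do not come from the study:

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p * ln p) over taxon abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Toy communities (invented numbers, not the study's data):
even_community   = [10] * 20          # 20 taxa, evenly distributed
skewed_community = [81] + [1] * 19    # one dominant taxon, 19 rare ones

# A more even community scores higher even with the same number of taxa.
print(shannon_index(even_community) > shannon_index(skewed_community))  # prints True
```

For a perfectly even community of 20 taxa, H equals ln(20) ≈ 3.0; the dominance-skewed community above scores only about 1.0, which is why industrialized microbiomes with a few dominant taxa register as far less diverse.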
"Our results bolster a growing body of data suggesting a link between, on one hand, decreased bacterial diversity, industrialized diets and modern antibiotics, and on the other, immunological and metabolic diseases -- such as obesity, asthma, allergies and diabetes, which have dramatically increased since the 1970s," said Maria Dominguez-Bello, PhD, associate professor of medicine at New York University Langone Medical Center and senior author of the study. "We believe there is something occurring in the environment during the past 30 years that has been driving these diseases, and we think the microbiome could be involved."
Dominguez-Bello said the research suggests a link between modern antibiotics, diets in industrialized parts of the world and a greatly reduced diversity in the human microbiome -- the trillions of bacteria that live in and on the body and that are increasingly being recognized as vital to good health.
The vast majority of human microbiome studies have focused on Western populations, so access to people unexposed to antibiotics and processed diets may shed light on how the human microbiome has changed in response to modern culture, and may point to therapies that can address disease-causing imbalances in the microbiome.
In the current study, when the researchers exposed cultured bacterial species from the tribe to 23 different antibiotics, the drugs were able to kill all of the bacteria. However, the scientists suspected that these susceptible bacteria might carry silent antibiotic resistance genes that could be activated upon exposure to antibiotics.
They tested for such activation, and the results confirmed their suspicions: the bacterial samples harbored numerous resistance genes capable of fending off many modern antibiotics, genes that may switch on in response to antibiotic exposure.
"However, we know that easily cultured bacteria represent less than 1 percent of the human microbiota, and we wanted to know more about potential resistance in the uncultured majority of microbes," Dantas said.
So the researchers applied the same method, called functional metagenomics, to identify functional antibiotic resistance genes from Yanomami fecal and oral samples without any prior culturing. From that experiment they were able to identify nearly 30 additional resistance genes. Many of these genes deactivated natural antibiotics, but the scientists also found multiple genes that could resist semi-synthetic and synthetic antibiotics.
"These include, for example, third- and fourth-generation cephalosporins, which are drugs we try to reserve to fight some of the worst infections," said Dantas. "It was alarming to find genes from the tribespeople that would deactivate these modern, synthetic drugs."
As for how bacteria could resist drugs that such microbes never before had encountered, the researchers point to the possibility of cross-resistance, when genes that resist natural antibiotics also have the ability to resist related synthetic antibiotics.
"We've seen resistance emerge in the clinic to every new class of antibiotics, and this appears to be because resistance mechanisms are a natural feature of most bacteria and are just waiting to be activated or acquired with exposure to antibiotics," Dantas said.
This work was funded by the C&D Fund, the Emch Fund, the Helmsley Charitable Trust, SUCCESS, NAKFI Synthetic Biology, a Washington University I-CARES award, the Diane Belfer Program for Human Microbial Ecology, an NDSEG graduate fellowship, a Howard Hughes Medical Institute Early Career Scientist Award, and grants from the National Institute of Diabetes and Digestive and Kidney Diseases and the National Institute of General Medical Sciences of the National Institutes of Health (NIH), grant numbers DK062429, DP2-DK098089, R01-GM099538 and UH2AR057506.
If you’ve ever tuned into the raw feed of a political press conference, you might have noticed that the language can feel…arcane? Obtuse? Intentionally obfuscatory? It’s not just you. The press and political spokespeople have evolved a strange sort of trade language that allows them to exchange, or avoid exchanging, actual news in an official briefing that you might otherwise think was nominally created for that purpose. And if you really want to see this kind of dialogue in action, you should check out spokespeople talking about defense and the military.
So that’s what makes a recent string of tweets from Phil Ewing, an experienced defense reporter now writing for Politico, so great. With the internet still on fire thanks to the new Star Wars trailer, Ewing started tweeting about the defense spending policies and priorities of the Imperial Starfleet—the star destroyer-flying, snowtrooper-deploying, Death Star-building, Alderaan-destroying bad guys of the Star Wars universe. Imagine if Grand Moff Tarkin had a publicist.
The results were magical, a Dark Side skewering of both the weird ways the Empire prosecuted its war against the Rebellion and the language crimes policymakers and their mouthpieces commit when they talk about the hard truths (and sometimes lies) of budgets and war.
Unfriended hits theaters today, and while it seems easy to reduce the movie’s elevator pitch to “I Know What You Did Last Summer meets Paranormal Activity with webcams,” it’s really part of a much more important cultural tradition. The ever-expanding ranks of Internet Cautionary Horror Movies didn’t just appear one day fully formed; there’s a history here. The stage for Unfriended has been under construction for years; this weekend just marks the moment director Levan Gabriadze’s movie gets to stand in the spotlight.
And these Internet cautionary tales encompass just one subgenre of horror, which has long been chronicling society’s ongoing battle with itself. Scary movies have always served as one of modern culture’s best time capsules. Using monsters as metaphors, horror films turn our actual fears into fantastically gruesome scenarios. Unfriended and its ilk simply reflect present-day anxieties about our lives online—just like teen slasher films tapped into our feelings about taboo topics like sex, drugs, and rock ‘n roll in the 1980s. And in every decade from the early 20th century to the present day, each installment in the genre gives us a fascinating window into the fears of our past, and therefore a greater understanding of our present.
Want to know how horror went from killers in the woods to killers on the web? Need a primer on all the ways our apprehensions about the Internet have manifested into big screen tropes? Then let’s get started, shall we?
First, Here’s a History Lesson
Following World War II, a national case of PTSD manifested in a crippling fear of The Other, which led to movies like It Came From Beneath the Sea (1955), The Day the Earth Stood Still (1951), and even a movie just called Them! (1954). (Because if you’re not one of us you’re one of them, and that ain’t good.) The economy was riding high, but people dreaded what lay beyond our borders. They’re coming for our women, and they could be here at any time! And they could be GIANT MONKEYS! Considering this was pre-globalization and the world for most people was about as big as their surrounding neighborhood and local grocery store, it makes sense that the fear cinema of the day would be defined by paranoia.
Then came the Vietnam Era, and horror movies got a whole lot more horrible. Fabulous tales of King Kongs and Godzillas—now quaint by comparison—got swapped out in favor of a gritty, real-life aesthetic stirred up by widespread social unrest. For the first time, America was seeing unfiltered images of its own brutality. Technology had progressed to the point that TV was now a full-fledged industry. The cameras were rolling, and even if we didn’t know it at the time, they’d never turn off again. America the Hero had become America the Villain, and the images of atrocities pouring in from the front lines put the blood on our hands—a massive and depressing reversal from our identity as liberators in WWII. Movies like Last House on the Left (1972), Night of The Living Dead (1968), and The Texas Chainsaw Massacre (1974) were dirty, cynical, gruesome affairs playing to the disillusionment of an increasingly cynical audience.
That brings us to the 1980s, a funny time. People did a lot of coke, listened to Mötley Crüe, and watched tons of cheap horror. It was the golden age of the slasher movie (you have 18 titles to choose from in the Freddy, Jason, Michael, Leatherface canon between 1980 and 1989) and while that’s not the best comment on credibility, we did get a lot of sex, drugs, and rock music. Maybe horror was trying to tell us something about the dangers of excess? Maybe everyone was just trying to piss off Ronald Reagan? Either way, well played, 1980s. That was fun.
And thus we arrive in the 1990s, which in terms of horror movies is the beginning of the era that would eventually give us Unfriended. The Internet and computers were moving out of military bases and into homes. Cell phones may have been enormous, but they were real, and even Zack Morris had one in Saved By the Bell. It’s during this time that the distributary of Internet Cautionary Horror really starts breaking off from the surging blood river of scary movies, and it can be broken down into three phases.
The Internet Will Kill Us
In the beginning, there was fear. Personal computing and Web 1.0 made globalization an in-home event. We could see the technology infiltrating our lives, but we didn’t know what it was capable of. Looking back on the movies of the day, we see a future in which the entire world is online, and we don’t just mean texting. The coming age would exist in cyberspace, and the landscape would be littered with eye phones, not iPhones.
In 1992’s Lawnmower Man, virtual reality experiments turned a quiet, developmentally challenged landscaper named Jobe (Jeff Fahey) into a highly functioning and weaponized human-machine hybrid. “Nothing we’ve been doing is new,” Jobe tells his “creator” Dr. Lawrence (Pierce Brosnan). “We haven’t been tapping into new areas of the brain; we’ve been awakening the most ancient. The technology is simply a route to powers that conjurers and alchemists used centuries ago.”
Dr. Lawrence protests, warning Jobe that “Man may be able to evolve 1,000-fold through this technology but the rush must be tempered with wisdom!” Sorry, Doc, but the floodgates of technology are open and we’re all about to get drenched.
As the decade progressed, the advent of Internet cautionary tales turned technology into a pandemic, and VR was the transmission mode of choice. According to Lawnmower Man 2: Beyond Cyberspace (1996), the connected world would enslave us under maniacal demigods like Jobe. Virtuosity (1995) told us we’d eventually be 3-D printed sociopaths. And if you bought what Johnny Mnemonic (1995) was selling, electronic components themselves would turn the air into invisible poison.
It wasn’t until the all-important The Net in 1995 that fear of the web would be brought back down to Earth. Internet horror had outgrown its speculative VR playground and moved to a much more realistic domain where people were the enemy instead of prototypes. Phase two showed us that the trouble wasn’t the coming dystopia. It was the stranger next door.
People on the Internet Will Kill Us
The early 2000s were a formative time in web-based scary movies. Think of it as an adolescence: awkward, lanky, and stuck between its youthful naïveté and the eventual self-awareness of adulthood. In other words, it was a rough time.
Now that personal technology was approaching ubiquity, we understood that, at least for a while, the world wasn’t going to turn into a digital prison that gave people the “black shakes” with toxic airwaves. Phew! The Internet by itself wasn’t going to kill people, but as The Net taught us, people sure as hell were going to use it to do a whole lot of bad, which materialized in online sport killing. The time of the Internet Killer was upon us.
Feardotcom (2002), Halloween: Resurrection (2002), Dot.Kill (2005), and Untraceable (2008) all featured serial killers broadcasting their murderous deeds to craven audiences. Our relationship with technology—particularly the rise of the webcam—was running headlong into America’s post-9/11 appetite for torture porn, and the results were gruesome. The Internet was turned into a weapon, and by subscribing to livestreams users were wielding it in conjunction with the murderers themselves.
“Reducing relationships to anonymous electronic impulses is a perversion,” Alistair Pratt (Stephen Rea) tells us before he cuts someone open in Feardotcom. “The Internet offers birth, sex, commerce, education, proselytizing, politics, posturing. Death is a logical component. An intimate experience made more so by knowing the victim.”
The 2001 Japanese movie Pulse (remade in the States in 2006) incorporated elements of these other entries from the early 2000s, but subbed in a supernatural force for a meat-based murderer. Most importantly, though, this second phase of online horror was when we started taking responsibility for our actions online—or at least taking responsibility for the actions of someone else. Violence was still a result of outlier figures that played to our base instincts. We had some deniability left to grab hold of, but as phase three approached, it was becoming clear that the scariest thing online wasn’t some unknown horror. It was us.
We Will Kill Each Other on the Internet
By 2010, cell phones, FaceTime, Skype, laptops, iMessage, Facebook, Twitter, and all our other tools had permeated every facet of the developed world. The ubiquity of tech forecast in Johnny Mnemonic had come largely to pass (though in a less gross way) and the ability to actually live inside the Internet was a real thing thanks to globe-spanning role-playing games. We hadn’t been enslaved by Jobe or fallen victim to a frequency-based disease, but that meant we were running out of bad guys to blame. Maybe the problem wasn’t a scary series of tubes or a creepy guy with a webcam in a hovel. Maybe the scariest part of the Internet was learning that anyone could be a villain.
Such is the case in Unfriended. A campaign of online bullying, surely considered benign by those administering the persecution, results in the suicide of a young girl named Laura Barnes. On the anniversary of her death, the girl returns to haunt a handful of her classmates during a Skype hangout. Laura, like Jobe, has entered the machine, and if her former tormentors don’t comply with her demands, people are going to die. But she’d probably kill them anyway, even if they did listen, because Laura—or her ghost, or her proxy—has become the senseless Internet monster we’ve learned to fear as each kill is broadcast for everyone else to see. But all of this could have been avoided if Laura hadn’t been pushed over the edge by her peers, kids who she saw every single day—not masked killers, not psychopaths, just classmates.
While Lawnmower Man may have made us anxious in 1992, the progression of online horror has moved from global to local to very intimate confines, with the tight frame of the webcam emphasizing how close we are to that which we fear most. So if you go see Unfriended this weekend, make sure to consider the road that got us here in the first place. It doesn’t make for the most uplifting of movie marathons, but it’s one hell of an effective cautionary tale.
This week we got more details about Microsoft’s partnership with Cyanogen. The two companies plan to deliver Android devices pre-loaded with made-in-Redmond apps and services. This isn’t just a big deal for Microsoft—which will now be able to expand its mobile services reach far beyond the confines of Windows Phone—but it also gives us a sketch of a possible Android future that’s further outside of Google’s grasp. Also: cool new phones! And we’re way into that. The hosts discuss the possible outcomes for the mobile industry. Also on this week’s show, David and Michael consider the future of Best Buy, a retailer that’s seemingly stuck in the past. Lastly, some critique of Twitter’s new home page, which just got a facelift.
“White spaces,” or unused radio frequencies in between TV channels, have long been eyed by technologists as perfect for connecting a sea of countless devices to the internet—everything from heart monitors to your car. A little-known government database is supposed to help prevent America’s newest “white spaces,” or “super Wi-Fi,” wireless devices from interfering with other electronics.
But that database has been invaded by a group of sketchy characters going by the names John Q Public, Sue Q Public, NoneNone and John Doe. Some of them hail from 123 Jump Street. On May 1, the Federal Communications Commission is giving the public a chance to comment on how best to deal with these suspicious characters. What the FCC does about it could affect the evolution of the emerging Internet of Things.
Robert M. McDowell
Robert M. McDowell served as a Commissioner of the Federal Communications Commission from 2006 to 2013 and was an ardent supporter of unlicensed uses of television “white spaces” for new technologies. He is currently a partner at Wiley Rein, LLP and a Senior Fellow at the Hudson Institute, a non-partisan think tank. He can be found at @McDowellTweet.
During my seven years as an FCC commissioner, I was a strong proponent of allowing innovators to use white spaces without having to get an FCC license. The TV frequencies are highly coveted because they can carry large amounts of data over long distances while penetrating buildings. Enabling consumers and technologists to take advantage of these radio bands in an unlicensed manner is seen as the epitome of the “permissionless” internet. Innovation could spread quickly without having to wait for government approvals.
Our template for success was the first generation of Wi-Fi: it seemed almost as though no one had heard of it on Friday, but by Monday everyone was using it. Its unlicensed nature unleashed a beautiful explosion of entrepreneurial brilliance. Super Wi-Fi operating in the TV bands would be even better.
Under federal law, however, devices using white spaces cannot cause harmful interference to licensed users, such as TV broadcasters and some theaters and churches using wireless microphones. This requirement makes sense to ensure electronics don’t drown each other out, turning them into junk. But how do you ensure such devices won’t step on each other’s toes?
With a reliable national database of all devices using the TV white spaces, of course.
Why We Need a Reliable National Database for White Space Devices
The emerging Internet of Everything is estimated to generate up to $14.2 trillion in new global economic activity by 2030, including $7.1 trillion for the U.S. As everything from our home to our cars and our medical devices become connected to the Net, keeping it all functioning and thriving will require gobs of new spectrum. The TV white spaces provide the wireless “oxygen” needed to allow our tech economy to breathe. (Using some of the federal government’s spectrum offers even more potential, but that’s another story.)
To make it all work, however, devices can’t interfere with one another – hence the need for a flawless database.
Since the FCC formally opened the door to this new world in 2002, years of prototype testing by the FCC’s excellent engineers ensued, as did numerous proceedings that sought public comment. Finally, in 2011, the FCC created a framework that insulated licensed users from harmful interference while opening a new frontier for America’s tech economy.
All of the FCC commissioners, including myself, concluded that the key to moving forward with this new win-win model for tech innovation was having in place a high-quality national database that would precisely keep track of who had the new white-space devices and where they were located. Location accuracy would be especially important as America pushes to maintain its leadership role in perfecting the technologies of the future, such as driverless cars.
Trouble at 21 Jump Street
As recently reported in Re/code, however, new studies are revealing that up to a third of the information in the database may be inaccurate or just plain made up.
For instance, in addition to numerous “John Does” and “John Smiths,” studies conducted on behalf of the National Association of Broadcasters reveal that more than 80 devices are registered in the database under the name “Meld test.” Others are registered as being located at “123 Jump Street” or have contact phone numbers such as “(999) 999-9999”. One device is registered to a spot in the Atlantic Ocean about 500 miles off the coast of Cameroon.
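Bogus entries like these are exactly the kind of thing simple automated sanity checks could flag at registration time. The sketch below is purely hypothetical: the field names and check rules are invented for illustration, not drawn from the FCC's actual database schema.

```python
import re

# Hypothetical placeholder values drawn from the patterns described above.
PLACEHOLDER_NAMES = {"john doe", "john q public", "sue q public", "nonenone"}
PLACEHOLDER_STREETS = {"123 jump street"}

def flag_suspect(entry):
    """Return a list of reasons a registration looks bogus (empty if clean)."""
    reasons = []
    if entry.get("name", "").strip().lower() in PLACEHOLDER_NAMES:
        reasons.append("placeholder registrant name")
    if entry.get("address", "").strip().lower() in PLACEHOLDER_STREETS:
        reasons.append("placeholder street address")
    phone = re.sub(r"\D", "", entry.get("phone", ""))
    if phone and len(set(phone)) == 1:  # e.g. (999) 999-9999
        reasons.append("repeated-digit phone number")
    return reasons

entry = {"name": "John Q Public", "address": "123 Jump Street",
         "phone": "(999) 999-9999"}
print(flag_suspect(entry))
```

Checks like these only catch obvious junk, of course; geographic plausibility (a device 500 miles off the coast of Cameroon) would require validating coordinates against landmass and service-area data.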
These falsehoods aren’t just humorous isolated errors; they number into the hundreds. And they show a cavalier attitude towards maintaining the integrity of the cornerstone of our next-gen tech economy.
How We Fix It
The FCC should meticulously reverse engineer its flawed database and reveal how its vendors bungled their fundamental task. Hollow promises of “we’ll fix it” are not enough. The purpose of a deep-dive investigation shouldn’t be to assign blame, necessarily, but to learn from mistakes to create a more perfect system. The rest of the world is watching and may emulate what we do. Additionally, the FCC should lay out a detailed plan, with an opportunity for public comment, on how to make the database perfectly efficient and transparent to ensure this debacle doesn’t happen again. Going forward, we can’t rely on after-the-fact studies to tell us how broken this crucial system is. America’s technologists need the peace-of-mind that their ingenious creations will actually work. And America’s consumers deserve no less.
Only about 600 devices are in use right now, but in a few months, a massively important FCC spectrum auction will open up even more white spaces. The FCC should hurry to fix its flawed database before then.
If the FCC can’t make the database work, not only will tech innovators and broadcasters alike be harmed, but America’s consumers will lose. Either way, the FCC should act fast and give the tech sector a pathway to success.
The big winners in TV this week were musical acts and, for the second week in a row, Game of Thrones. James Corden re-upped Carpool Karaoke, which was already off to a solid start with Mariah Carey, and brought the house down with Adele’s closest competition for Voice-of-God-on-Earth, Jennifer Hudson. And even though she didn’t sing a note, Anne Hathaway put down the definitive interpretation of Miley Cyrus’ “Wrecking Ball” on Jimmy Fallon’s new show, Lip Sync Battle. Sorry, Emily Blunt, but you stood no chance. Elsewhere, Jimmy Kimmel got the whole Avengers band together for one big happy visit and Billy Crystal reminded us why we keep giving Billy Crystal a chance. So pour yourself a glass of your favorite something and let’s toast with Conan O’Brien to the finale of Justified, and to this week’s best TV.
The Late Late Show with James Corden—Jennifer Hudson Carpool Karaoke (Above)
Nothing is better than this. Jennifer Hudson’s voice is proof of God, because Jennifer Hudson believes in God and someone gave her these gifts and it must be God because who else is giving voices like this away?! So there you go. Sorry, atheists, but we’re going with JHud on this one.
Lip Sync Battle—Anne Hathaway’s ‘Wrecking Ball’ vs. Emily Blunt’s ‘Piece of My Heart’
Late Night with Seth Meyers—Taraji P. Henson on Improvising Cookie Lyon’s Insults
Taraji P. Henson is having Tom Hanks levels of fun in Hollywood right now. Bless her LOLing heart.
The Tonight Show Starring Jimmy Fallon—Magician Dan White Performs a Trick Using the Tonight Show Crowd
Saturday Night Live—Game of Thrones
Brienne of Tarth, you just got some competition for Baddest Bitch in Town.
Jimmy Kimmel Live!—Avengers Family Feud
Jimmy Kimmel did his best Jimmy Fallon impression this week and ran a majorly A-list edition of Family Feud with the Avengers. The net worth of this bit is mind-boggling—much like the general effect of gazing upon Chris Hemsworth.
The Late Show With David Letterman—Billy Crystal’s Musical Tribute to David Letterman
Kind of creepy, constantly self-deprecating TV institution David Letterman is about to step away from the Late Show gig for good, but who better to be wishing Uncle Dave farewell in the form of song than that old so-and-so, Billy Crystal? Here’s two old comedians and a lot of memories being put to bed.
The Daily Show with Jon Stewart—Billy Crystal
As Billy Crystal preps for the premiere of his new show The Comedians on FX, he has apparently also been tasked with sending off retiring fellow funnymen of a certain age. The genuine affection Stewart and Crystal have for one another is completely endearing, and we can only hope that if they really do get stoned and go to a supermarket together, that we happen to be doing our grocery shopping at the same time.
Conan—Timothy Olyphant Raises a Toast to Justified
And with that, we raise our glasses in solidarity. Farewell, Justified! We’ll always have Harlan. (And we’ll always have Timothy Olyphant’s sweet, handsome face.)
Late Night with Seth Meyers—Carice Van Houten Is Still Surprised by Game of Thrones
So Carice Van Houten’s real life resting face isn’t significantly more inviting than that of Melisandre, her character on Game of Thrones, but at least she seems to have a charming sense of humor about the intensity of the show. She also apparently got super psyched about contributing voice work to The Simpsons and wants more than anything to be immortalized on a pinball machine. Well alright then! More Carice Van Houten, please.
Scientists at the University of Veterinary Medicine Vienna investigated whether stomach ulcers in cattle are related to the presence of certain bacteria. For their study, they analysed bacteria present in healthy and ulcerated cattle stomachs and found very few differences in microbial diversity. Bacteria therefore appear to play a minor role in the development of ulcers. The microbial diversity present in the stomachs of cattle has now been described for the first time, in the journal Veterinary Microbiology.
Gastritis and stomach ulcers in humans are often caused by the bacterium Helicobacter pylori. But other factors, such as stress and nutrition, also play a role in stomach health. In cattle the weather and husbandry in general play an additional role. The etiological role of bacteria in abomasal ulcers was investigated by veterinarian Alexandra Hund of the Clinical Unit of Ruminant Medicine together with microbiologist Stephan Schmitz-Esser of the Institute for Milk Hygiene.
"The abomasum is the last of the four stomach compartments in cattle. The three other compartments, the rumen, the reticulum and the omasum, serve to predigest the food. The abomasum is the actual stomach and is similar in anatomy and function to the human stomach. Painful gastritis and ulcers can occur in the abomasa of cattle, potentially weakening the animals, leading to perforations of the stomach and possibly even to cases of death," first author Alexandra Hund explains.
Microbial communities of healthy and ulcerated stomachs nearly identical
Microbiologist Schmitz-Esser analysed stomach samples from slaughter cattle. Around half of the samples were taken from healthy cattle, the other half from cattle with low-grade abomasal ulcers. "Very sick animals are barred from slaughter," says Alexandra Hund.
The researchers isolated and sequenced the bacterial DNA from the stomach samples. The DNA sequences were then used to determine the type of bacteria present. "The most common were species of Helicobacter, Acetobacter, Lactobacillus and new strains of Mycoplasma. The bacterium Helicobacter pylori, commonly found in humans, was not present at all. We saw nearly the same bacterial composition in healthy and ulcerated animals, which suggests that bacteria only play a minor role in the etiology of abomasal ulcers," says Schmitz-Esser. "However, this is something we would like to underpin in future studies."
Different bacteria in calf stomachs
Calf stomachs contain a relatively immature microbial community whose bacterial diversity is still developing. The primary bacteria found in calf stomachs were beneficial lactic acid bacteria. These bacteria enter the stomachs of calves through the milk that forms their main source of nutrition.
Abomasal ulcers difficult to detect
"Due to the very subtle symptoms of abomasal ulcers, they are very difficult to diagnose for non-experts. The abomasum is the last of the four stomach compartments and therefore not accessible to gastroscopy. We are currently working on a method for the early and rapid diagnosis of those ulcers. In any case, keeping cattle stress-free is one way of preventing stomach ulcers," Alexandra Hund recommends.
With the announcement that Microsoft would partner with the truly open-source, Android-based Cyanogen OS to provide a bundled suite of apps, both companies made one thing very clear: Android’s not just for Google anymore.
The partnership, as detailed by Cyanogen yesterday, will allow the budding mobile OS to preload Microsoft apps like Outlook, Office, Skype, Bing, OneDrive, and OneNote. The subtext here is that these apps can act as a replacement for the ones that Google appends to its Android releases, such as Gmail, Maps, Hangouts, and more.
Google’s obviously not the only company to preload phones on its platform with home-grown software; every iPhone comes with dozens of apps installed long before you ever power it on, and Windows Phone devices ship with plenty of Microsoft-made live tiles in place. But the increasing creep of apps you can’t uninstall, regardless of whether you want or need them—or if there are better alternatives out there—is one of the motivating forces behind the all-open-everything Cyanogen business model.
Dissociating Android from Google sounds great in theory but leaves several gaping holes in the user experience—holes that Cyanogen will now attempt to fill with bizarro-world Microsoft counterparts. Importantly, though, Cyanogen OS won’t shove Microsoft-owned Skype down your throat; according to spokesperson Vivian Lee, the apps will be “surfaced contextually,” meaning they’ll be presented as an option when it seems like they might be helpful, but you’ll also be welcome to use whatever else instead. You can also uninstall them at will, unlike the unkillable apps tethered to Apple and Google devices.
Lee also confirmed to WIRED that the partnership won’t affect existing devices, meaning a future update won’t mess up your OnePlus One workflow by swapping your Google apps for Redmond imposters.
For Cyanogen, the benefit is clear: Choice is its best point of differentiation. But it also doesn’t mean much without a wide variety of options from which to choose. The Microsoft deal is just one (albeit large) step towards having as many partners on board as there are mobile developers. “Cyanogen is committed to opening up Android,” said Lee. “[It’s] predicated on user choice as an operating system.” The defining ethos here isn’t that Microsoft alone will act as an anti-Google; it’s that Microsoft will help populate the broadest mobile ecosystem available, an expansive nature reserve next to everyone else’s walled gardens.
What’s In It for Microsoft
The more interesting question might be what Microsoft gets out of the arrangement. After all, it has its own mobile platform to worry about in Windows Phone, which nearly five years after first launching still hasn’t made an appreciable dent; according to the most recent Comscore numbers, it ended January with a US market share of just 3.6 percent.
That failure to gain traction may be why Microsoft has recently embraced a push to put its software on its more popular rivals. Outlook launched earlier this year on both iPhone and Android, while its Office suite went free on iOS and Android last November.
What’s even better than trying to establish an app beachhead in highly contested territory, though, is becoming the default app on a relatively new platform with lots of potential for growth. By working closely with Cyanogen, Microsoft now essentially has its own Android OS, which gives it a potential reach far greater than its own homegrown platform has found so far.
No Hardware, No Cry
The best part is that Microsoft won’t have to rely on its own devices to succeed. Lee says there are “no plans” for Microsoft Cyanogen hardware at present. Even so, any OEM that wants to hedge against Google’s increasing dominance without sacrificing the Android experience will have to at least consider Cyanogen OS, especially after the breakout success of the OnePlus One. Even if that only means a handful of low-cost devices for the time being, those are potential Bing and Skype and Outlook users that Microsoft would have otherwise been unlikely to reach.
One last wrinkle worth mentioning? Thanks to a trove of patents, Microsoft has Android licensing agreements that amount to billions of dollars of revenue every year, including a billion from Samsung in 2013 alone. Presumably as Android proliferates in whatever form, so too will Microsoft’s potential patent profits.
That’s a lot of upside with not much to lose, especially given the recent cross-platform push. And an arrangement like this makes more sense than the $70 million investment Microsoft was rumored to make back in January. Cyanogen doesn’t have to feel beholden to one software suite, and Microsoft limits its financial exposure and Windows Phone conflicts.
It’s going to be a while before we see products that realize the vision Cyanogen and Microsoft have laid out, and even longer before Cyanogen OS becomes more than a product that floats on the margins. But the news gives legitimacy to the idea that iOS might not be Google’s only serious competition for long. A more open Android is on the rise, and Microsoft just provided a powerful updraft.
Much has been made about the mobile “revolution” in the developing world, the way that smartphones have enabled the citizens of so many poorer countries to leapfrog into the 21st century without having to bother with all the awkward technological steps in between.
It’s that mentality that’s driving the development of Facebook’s internet-connected drones and Google’s internet-connected balloons. The thinking goes that because so many people in the developing world are buying smartphones (and they are), all they need is access to the internet, and they’ll be well on their way to becoming full, equal participants in the global economy.
In some ways, the mobile-plus-internet combo has the potential to deliver on its promise. There’s a lot—and increasingly more—that you can do on a smartphone. But then again, think of all the things you can’t, or, at the very least, that you just wouldn’t want to—like draft a presentation, populate an Excel spreadsheet, or write this story. When you think of it that way, all this talk of what people in the developing world can accomplish if only they had a mobile phone and an internet connection can seem a bit, well, patronizing.
As it turns out, people with less might actually want more.
“They want the same things you and I have, and not just because we have it,” says tech entrepreneur Matt Dalio. “They want the same things you and I have for the same reason you and I have it.”
Which is precisely why Dalio founded Endless, a startup that has developed a PC and operating system for the developing world. Endless launched a Kickstarter project for the device this week, but the campaign is mostly for marketing, since the team has spent the last three years developing the technology and testing it with users throughout the developing world. Now, Endless wants to expand that reach even further.
Not Waiting for a Connection
The hardware itself is a small, egg-like device that can plug into any television and turn it into a computer screen, giving people instant access to a desktop computer for just $169. This price point means, initially, Endless is not targeting the bottom of the pyramid, but the emerging middle class within these countries that may be able to afford a device like this.
But the real innovation is not the device itself. It’s the operating system, which Endless built from scratch, specifically for people who have limited experience with computers and who don’t always have a reliable connection to the internet. Designing it required spending a huge amount of time on the ground, in countries like India, Guatemala, and Bangladesh, testing out the technology with users. It was that process that not only convinced Dalio that mobile technology was an incomplete solution for the developing world, but also helped him understand that the Endless team would have to completely rethink the way a computer should operate in order to succeed.
For starters, Endless had to contend with the lack of connectivity in these countries, an issue that companies like Facebook and Google are actively working to solve, but one that will take years, if not decades, to fix. So, the Endless team took a cue from the early days of PCs by loading the devices up with more than 100 apps, including things like Khan Academy, encyclopedias, health apps, and more, which work both online and off. “We thought, we can’t give them better connectivity, but what we can do is solve it in the way we used to solve it before we had internet, and that was to have something like Encarta,” he says, referring to Microsoft’s digital encyclopedia, which was popular in the 90s.
Off the Grid
According to Rich Fletcher, a research scientist at MIT’s D-Lab, it’s this offline capability that distinguishes the Endless PC from other similar technologies that have failed to make this type of technology work in the past. “We’re now used to always being connected, and that’s dangerous,” Fletcher says. “Having a local cache or server that lets you use apps that don’t require full-time connectivity is really important.”
Still, Fletcher says Dalio and his team may be underestimating the extent to which people in the developing world want “the same things we have,” and not an adaptation of them. “If the people in New York and Boston aren’t using this Endless computer, people in the developing world are going to be very cautious,” Fletcher says. “They’re going to say, ‘What is this? Why doesn’t my cousin in New York have one, and if he doesn’t want it, why should I?'”
Then, there is the question of electricity. Though Endless is targeting a market segment that generally has electricity and modern appliances, Fletcher warns that in many parts of the world, the grid is less than reliable. “Just like always-on connectivity, you cannot assume always-on electricity.”
For that matter, you can’t assume that Endless will succeed at all, or Facebook’s drones or Google’s balloons. But one assumption that is safe to make is that users, no matter where they live, won’t be content with a second-class experience. Mobile might be good enough for lots of things. But it isn’t everything.
The Aiaiai TMA-2 headphones let you pick your own drivers, earpads, headband, and cables. They also come in artist-produced configurations, like this Young Guru preset. Aiaiai
In addition to having one of the most fun-sounding names in the business, Aiaiai cranks out some mighty fine headphones. The Danish company’s previous top-of-the-line cans, the understated but fashionable TMA-1s, sounded as good as their matte aesthetics looked. Parts of the TMA-1 were modular—you could swap out the on-ear pads for some over-ear noise-isolators, for example—but the new TMA-2s take that modularity to bold new heights.
You essentially build these headphones from scratch: You select your speaker unit, your headband, your earpads, and your cable type. When you get the box, it’s filled with each component in its own little sealed bag, and you just pop it all together. There are 18 distinct components and 360 possible combinations of them.
As cool as that is, so much choice can be a bit overwhelming. Luckily, Aiaiai has made it easy to assemble the ideal pair of headphones for the music you want to listen to and the sound characteristics you want out of them. Just like the headphones themselves, there are tons of ways to piece together that info, and they’re all pretty helpful.
You can mix together your headphones from scratch using a configuration tool on the Aiaiai site. Aiaiai
The company offers an interactive configuration tool that lists details about sound quality for each combo, along with a roster of “preset” headphones based on particular use cases, configurations put together by musicians, and configurations suited to specific genres of music.
So what are your actual options? For the speakers, you have your pick between four 40mm titanium and neodymium drivers, each one of them tuned for a different sound profile (using vague audiophile-speak like “all-around,” “punchy,” “warm,” and “vibrant”). No matter which drivers you pick, you can pair them with a set of microfiber or leather earpads, with both on-ear and over-ear styles available. There are three different headband styles with different levels of padding, and six different cord styles—you can get a curly or straight cable with or without inline controls, microphones, or adapters.
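Those component counts multiply out to the totals the article cites. As a rough sanity check, here is a short sketch; the per-category split is an assumption (the article doesn't enumerate the earpad variants), chosen so that the tallies match the stated 18 distinct components and 360 combinations:

```python
from math import prod

# Hypothetical tally of the TMA-2 component options described above.
# The earpad count of 5 is an assumption: 4 + 5 + 3 + 6 = 18 parts
# and 4 * 5 * 3 * 6 = 360 combinations, matching the article's figures.
options = {
    "speaker units": 4,  # "all-around", "punchy", "warm", "vibrant"
    "earpads": 5,        # microfiber/leather, on-ear/over-ear variants
    "headbands": 3,      # different levels of padding
    "cables": 6,         # curly/straight, with/without inline controls
}

total_parts = sum(options.values())    # distinct components: 18
total_combos = prod(options.values())  # possible headphone builds: 360
print(total_parts, total_combos)
```

Because every choice is independent, the combination count is just the product of the per-category option counts.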
As you might expect, prices vary depending on your configuration, but most land around the $200 mark. The highest-end combo would cost $250, with the “Vibrant” voice-coil driver, a “High Comfort” padded headband, over-ear leather earpads, and a cable with all the fixins. The cheapest combo is just $140, with the “All-Round” driver, a headband with slim padding, microfiber on-ear pads, and a basic cable. And of course, you can upgrade these cans over time, as all the parts are interchangeable.
What’s in a name?
In the video above, first aired in the U.K. in 1980, the host walks us through a number of ’80s-era electronic products, such as a boom box and a VCR, noting that they were all made by one of the world’s leading manufacturers. In the end he reveals that the products weren’t made by Sony or any other household name, but by Samsung, then a relative unknown in the West.
“You pay for our product,” he says, “not our name.”
At least not in Europe or the US. But by the ’80s, Samsung was already a decades-old company. In fact, as documented on its corporate website and in a lengthy TechCrunch profile, Samsung’s history stretches back to the 1930s—back when the smartphone giant of today was all about groceries.
A son of privilege, Lee Byung-chul founded a company called Samsung Sanghoe in 1938. He started by exporting fish, vegetables and the company’s own brand of noodles to China. From there, the company expanded into milling flour. After a serious setback during the Korean War, Lee rebuilt the company and diversified into a wide range of industries, including manufacturing and construction.
But it wasn’t until 1969 that the Samsung Corporation launched its Samsung Electronics subsidiary, first offering black and white televisions and later expanding into the refrigerators and other consumer electronics that ultimately made the company famous worldwide.
By the early 1990s Samsung was already one of the largest semiconductor companies in the world, and its consumer electronics line had finally become internationally known. But it wasn’t until 2010 that the company launched the product it’s most closely associated with today: the Samsung Galaxy S smartphone.
The Galaxy S wasn’t the company’s first smartphone, but it was the device that propelled the brand to the forefront of the smartphone race and ultimately pitted it against Apple, one of the company’s most important customers.
Most Apple devices have at least a few Samsung components, even if the company’s name isn’t attached to the gadgets themselves. These days if you buy an iPhone, you are literally paying for Samsung’s products rather than its name.
Notaro performs at one of the shows featured in Knock, Knock, It's Tig Notaro. Showtime
In the years since her stunningly honest breakthrough standup album Live, Tig Notaro has been busy. The veteran comic has been a guest star on Transparent and a writer on Inside Amy Schumer. A documentary on her life since being diagnosed with breast cancer—the very news that sparked the performance that became Live—premiered at Sundance, and she’s got an hour-long HBO comedy special coming later this year. But between the acutely personal documentary and the victory-lap performance, she’s got one more trick up her sleeve, and it’s a hybrid of the two: Knock Knock, It’s Tig Notaro, which follows Notaro and fellow standup Jon Dore on a weeklong tour and airs on Showtime tonight.
Instead of touring comedy clubs in major cities, Notaro solicited fan suggestions to host shows in non-traditional venues, ultimately narrowing her itinerary down from more than 1,000 submissions: a geodome in Topanga, California; a lake house in rural Indiana; an abandoned warehouse in East Nashville, Tennessee; and a flatbed trailer in Pluto, Mississippi, a town with a permanent population of fewer than 10 people. At the outset of the special, when perusing video submissions with Nick Kroll, Notaro recounts a few of the places she’s performed over her 17 years in standup—from a barn to a barbecue with a crowd of seven. “In front of a bunch of gators?” Kroll asks. “No,” Notaro deadpans, “I’ve only been performing for 17 years.”
Exit the Stage, Enter the Spontaneity
It’s a tour model Notaro has used before, in part because it gives rise to encounters that she calls “endlessly amusing.” Without the artifice of a theater or even a designated stage, a far more intimate connection arises between performer and audience, something that fascinates Notaro. And the feeling of spontaneity that permeates every bit of standup featured in Knock Knock wasn’t just born of the tour setup. After the rigors of cancer treatment, Notaro didn’t have enough material prepared for an entire set, but found that supplementing the routine with crowd work had two benefits: “it was a way to search for new material, and also make each experience special.”
The approach makes the show less about the jokes themselves, and more about how Notaro and Dore interact with each specific audience—like Notaro zeroing in on a woman in Indiana who can’t stop laughing at her impression of a clown horn. That’s markedly different from standup’s prevailing sensibility, in which the performer treats anyone making comments as a heckler. Notaro doesn’t snap at anyone who engages, but instead actively encourages the interaction, leading to moments like one during the Topanga performance when a woman offers a suggestion for a punchline, and Notaro deftly demonstrates its inferiority, milking laughs from the backyard audience throughout. Comics like Aziz Ansari have experimented with breaking the fourth wall in a similar way, but while Ansari does it for raw material, Notaro uses it because she’s trying to create a comfortable atmosphere. “I consider the fact that people have saved up money for a babysitter,” she says, “or that they had a rotten day or they drove three states to see the show.” In her perfect world, everyone goes home happy.
At times, Knock Knock mirrors the well-worn structure of other tour documentaries like Kings of Comedy or Jerry Seinfeld’s Comedian: Notaro and Dore stage arguments in the car as they drive through the country; they poker-face their way through a joke about turning the documentary into something vastly more serious; basically, they joke about being jokers. But Notaro insists that Knock Knock and Tig couldn’t be more different: “One is about my life over the past two years, and one was filmed on a tour for a week.”
Yet, despite Notaro’s insistence that the Showtime special is far more lighthearted, there are moments of darkness, especially when her illness pokes through the veneer. During her run at Second City’s UP! Theater in Chicago, Notaro was hospitalized, leading to Dore stepping in as the headliner. He adapts with aplomb, and a subsequent scene in Notaro’s hotel room after her hospital stint glosses over the scare, but the very real threat of severe illness lingers.
And the best scene in the entire special has almost nothing to do with standup. At Sutton’s Monuments, a combination fireworks stand and granite monument company in Notaro’s birth state of Mississippi, she and Dore browse a selection of headstones. So close to Notaro’s hospital scare, it’s a breathtaking moment of tragicomedy, as the two bicker about whether a plain flat stone would be enough, or if they should splurge on something more visible.
When Louis C.K. debuted the recording of Notaro’s Live on his website, he said that Notaro “took us to a scary place and made us laugh there.” Though it’s not the central idea in her comedy, it has been her trademark response in the face of paralyzing tragedy. And that willingness to wade through those difficult topics with disarming goofiness has made her all the more endearing. Knock Knock, It’s Tig Notaro might not be on the same level as her breakthrough album, or as essential as the documentary that cataloged her life since that point, but it’s yet another demonstration of her unique talents as a comedian: daring enough to take her craft outside of comedy’s comfort zone, and into the lives of everyday people in order to connect with them.