Where to Watch Livestreams of Apple’s Big Thursday Event



Members of the media and Apple employees wait outside Apple headquarters before the 5s announcement last year. Tomorrow, we will be back in the same location preparing to give you up-to-the-minute coverage. Marcio Jose Sanchez / AP



If you’re getting a sense of déjà vu, don’t be alarmed: Yes, Apple just had an event last month. But tomorrow we’re headed back to Cupertino, where the company is set to unveil new iPads, new iMacs, and OS X Yosemite.


For those who want to follow along in real time, Apple will once again be livestreaming the proceedings. Here are all the ways you can tune in.


When do the festivities begin?


The event kicks off at 10 am Pacific/1 pm Eastern on Thursday, October 16. You can add a calendar event if you don’t want to forget about the livestream once you get wrapped up in work Thursday morning.


Is there a live stream of the event I can watch?


Why yes, there is! Just go to http://ift.tt/1t4I2sK, and Apple should be streaming video footage from its Cupertino headquarters. At the iPhone event last month, there were some technical streaming difficulties, so if you’re a die-hard Apple fan we recommend tuning into a live blog (like ours!) for a feed of the news as well.


What’s the catch?


Yeah, this is an Apple event. That means all the usual unnecessary hardware and software restrictions apply. Technically, the video stream will only work on Safari 5.1.10 or later, on a Mac running OS X 10.6.8 or higher. Apple doesn’t officially support streaming of its video to Windows PCs, Chromebooks, Android phones, or anything that doesn’t sport its “Designed by Apple in California” moniker.


If you’re on an iOS device, Apple’s event page says that you can watch the video in mobile Safari on devices running iOS 6 or later. You can also stream it on second- or third-gen Apple TVs with firmware version 6.2 or later.


But what if I don’t own an Apple device and want to watch?


If you’re up for some experimentation, you can try a user-agent-string spoofer for Firefox, Chrome, Internet Explorer, or Opera to give the appearance you’re using Safari. If you’re on an Android device, there are also iPhone and iPad user-agent spoofers available for the Chrome browser.
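For the curious, the spoofing trick boils down to sending a Safari-style identifier with each request, which is exactly what those browser extensions do. Here is a minimal, hypothetical sketch in Python; the user-agent string and URL are illustrative assumptions, and fetching the page this way only demonstrates the header swap, since actually playing the stream still requires a player that understands Apple’s HTTP Live Streaming format.

```python
# A minimal sketch of user-agent spoofing: send a request that claims to
# come from Safari on a Mac. Both the user-agent string and the URL below
# are illustrative placeholders, not values Apple has published.
import requests

SAFARI_UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) "
    "AppleWebKit/537.78.2 (KHTML, like Gecko) Version/7.0.6 Safari/537.78.2"
)

response = requests.get(
    "https://www.apple.com/live/",  # hypothetical event-page URL
    headers={"User-Agent": SAFARI_UA},
)
print(response.status_code)
```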


These are the same tech specs that were required at September’s iPhone event, so however you watched last time should work again this time around.


Also, we’ll be there



We will be live blogging and live tweeting the Apple event if you are unable to watch the livestream. Ariel Zambelich / WIRED





We recommend following WIRED’s own live coverage. It’s friendly no matter what platform you’re on, and comes in several different varieties (because not everyone wants, or has the luxury, to keep track of the second-by-second happenings of the event).

For as-it-happens action, tune your browser to our liveblog. You’ll find the link at the top of WIRED’s homepage Thursday morning. Christina Bonnington (@redgirlsays) will be reporting live from the event in Cupertino. We’ll also share in-depth story coverage on Gadget Lab’s Twitter feed (@gadgetlab), WIRED’s main Twitter feed (@wired), and on the WIRED homepage throughout the day.


Woo! I’m going to the event!


Rad! Try to find a friend to carpool with. Bay Area traffic can be a nightmare and Apple isn’t walking distance from the nearest Caltrain station. Once you’re there, go give Christina a high-five and a coffee. She’ll be the reporter on her iPhone sitting in line waiting to get inside.



How Crowdsourcing Will Help Startups Build Their Own Versions of Siri



Wit.ai currently offers a demo of their product online. Wit.ai



Speech recognition is hard, even for the world’s largest tech companies. Apple and Google draw on massive collections of recordings of real speech patterns to help tune the voice recognition algorithms that power Siri and Google Now. And even though those tools are impressive, they still spend an awful lot of time mangling your voice commands.


Building speech-powered applications is even harder for smaller companies that just don’t have access to the sort of resources that Apple and Google do. In short, you can’t draw on the massive set of real voice commands that the big guys can. “When there’s a single developer, you never have enough examples to get good,” says Alexandre Lebrun.


That’s why he started Wit.ai, a service that helps developers pool their voice samples together to power a speech and natural language recognition system that Lebrun hopes will soon rival the depth and breadth of the tools available to the likes of Apple and Google. In the years to come, this could become an important thing, as developers build the next wave of technologies that require speech interfaces, such as internet-connected appliances and wearable devices that don’t have screens.


Wit.ai is still new, but it has already attracted thousands of developers to its beta service, and on Wednesday, the company announced that it had just raised $3 million in seed funding from venture capital firm Andreessen Horowitz.


The Elephant in the Speech Rec Room


The company was born out of Lebrun’s frustrations with his experience at his previous company, VirtuOz, which developed speech recognition systems for companies like AT&T. The problem was that for each new system it built, the VirtuOz team had to start over—practically from scratch.


For each one, they had to gather a new set of voice samples to train the system. In many cases, there was overlap between the sets of commands different customers wanted to be able to recognize, but VirtuOz couldn’t reuse voice examples from one customer’s project in another.


“No matter how hard we tried, the elephant in the room was there – speech was never going to be perfect,” he wrote in a blog post today. “In fact, the end-user experience was sometimes catastrophic. Worse, because of the very high setup price to integrate voice into a system, no single vendor could truly address the needs of smaller companies or developers.”


Last year, Lebrun sold VirtuOz to Nuance, the speech recognition company that helps power Siri, and then he launched Wit.ai.



The Wit.ai team. Wit.ai



How It Works


Typically, a speech recognition developer begins by creating what’s called a “grammar”—a collection of words and phrases that you want the computer to be able to recognize. Then developers “train” the computer to recognize that grammar by feeding it as many different examples of people saying those words and phrases as possible. Since different users may phrase their commands differently, a grammar needs to be robust, covering as many different ways of expressing the same request as possible.
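To make the idea concrete, here is a toy sketch of what a grammar plus its training examples might look like. The structure and intent names are hypothetical, chosen only for illustration, and are not Wit.ai’s actual data format.

```python
# Toy illustration of a "grammar" plus training examples, in the spirit of
# the description above. The shape and names are hypothetical, not Wit.ai's
# real format.
grammar = {
    "set_alarm": {
        "examples": [
            "wake me up at 7 am",
            "set an alarm for seven in the morning",
            "alarm at 7:00 tomorrow",
        ],
        "entities": ["time"],
    },
    "play_music": {
        "examples": [
            "play some jazz",
            "put on music by Miles Davis",
        ],
        "entities": ["genre", "artist"],
    },
}

# The more phrasings each intent has, the more robust the trained model is
# to the different ways users express the same request.
for intent, spec in grammar.items():
    print(intent, "->", len(spec["examples"]), "training phrases")
```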


What Wit.ai is essentially doing is making it possible for companies to share grammars and training data in the same way that software developers share code on sites like GitHub. And just as developers can create their own copies of code hosted on GitHub to modify as they like, they can copy grammars to modify for their own applications.
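Continuing the GitHub analogy, “forking” a shared grammar might look something like the sketch below: start from a community grammar and merge in your own app-specific phrasings. Again, the data shape and the helper function are assumptions for illustration, not Wit.ai’s API.

```python
# Sketch of "forking" a shared grammar: start from a community grammar and
# layer on app-specific phrasings, much as one forks and modifies code.
# The grammar shape mirrors the toy example above and is hypothetical.
import copy

shared = {"play_music": {"examples": ["play some jazz"], "entities": ["genre"]}}

def fork_grammar(base, additions):
    """Return a private copy of a shared grammar with extra phrasings merged in."""
    mine = copy.deepcopy(base)
    for intent, phrases in additions.items():
        mine.setdefault(intent, {"examples": [], "entities": []})
        mine[intent]["examples"].extend(phrases)
    return mine

my_grammar = fork_grammar(shared, {"play_music": ["queue up something mellow"]})
print(my_grammar["play_music"]["examples"])
```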


The business model is similar to GitHub as well. Just as GitHub is free for anyone who shares their code publicly, Wit.ai is free to anyone who shares their data. The actual voice recordings used to train the system won’t be shared, for privacy reasons and practicality. Companies that, for whatever reason, don’t want to share their grammars or data can pay a fee to use the service.


The Free Proposition


Wit.ai joins a growing number of companies and projects aimed at helping developers bring speech recognition to their applications. There are also open source projects such as Julius and CMU Sphinx, and other hosted services such as Google’s voice-to-text offerings. Wit.ai goes a step beyond transcription: it interprets that speech, trying to determine what exactly the user wants to do.


By offering a free service, Lebrun hopes to attract a huge number of different grammars and training data, allowing it to offer speech recognition capabilities on par with Apple and Google.


One big downside is that all the audio has to travel across the internet to the company’s servers. That means there could be issues with latency, availability and privacy. But Lebrun says that a “hybrid” version that works mostly on the client side and then exchanges information with the server is on the way.





Big foray in the DNA pool: Retrieving small genomes from a mix of organisms

Scientists from the IZW, led by Alex Greenwood, have published in PLOS ONE a simple way to retrieve small genomes from a mix of various organisms.



Which viruses infect the elephant? Which type of bacteria causes severe lung disease in the European brown hare? Molecular biological analyses of tissue samples always confront scientists with the same problem: how to retrieve the genome of a specific pathogen from a mixture of DNAs in a patient and its microbial cohabitants? "Very easily," says Alex Greenwood from the German Leibniz Institute for Zoo and Wildlife Research. "A short single-stranded base sequence is offered to the prepared DNA soup as a bait. Now, as it happens, not only does the complementary target sequence take the bait, but by and by many other adjacent segments do so too." It does not even require a new method. The so-called "hybridisation capture method" offers everything that is needed. What is required is to pay attention during the subsequent data analysis.


Greenwood's doctoral student Kyriakos Tsangaras discovered the additional value of hybridisation capture by chance. This technology is based on tiny magnetic beads with short bait sequences of a few base pairs (oligonucleotides, or "oligos" for short) attached to them. Once these prepared beads are added to a sample mix of single-stranded DNA fragments, only the complementary target sequences bind to the oligos and short double-stranded DNA fragments are created. The beads are removed from the sample with the help of a magnet and the loose fragments are rinsed off. Then, the short double-strands are eluted from the magnetic beads and sequenced.
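The capture step lends itself to a toy simulation: a bait sequence pulls out of the pool only those fragments that can base-pair with it. The sequences below are invented, and the sketch ignores real-world details such as partial matches and melting temperatures.

```python
# Toy model of hybridisation capture: magnetic beads carry a short bait
# oligo; single-stranded fragments complementary to the bait anneal to it
# and are pulled out of the mix. Sequences are invented for illustration.
def revcomp(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def capture(bait, fragments):
    """Return fragments that can base-pair with the bait oligo."""
    target = revcomp(bait)  # the sequence a fragment must contain to anneal
    return [f for f in fragments if target in f]

bait = "ATGCGT"
pool = ["TTACGCATTT", "GGGGGGGG", "ACGCATAA"]
print(capture(bait, pool))  # fragments containing ACGCAT, the bait's complement
```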


On the day of discovery, Tsangaras only wanted to compare a particular sequence of DNA enclosed in the mitochondria of different southeast Asian rodents. He therefore used a sequence of about a thousand base pairs to capture the relevant DNA. "Yes, we have the sequence," he then told Greenwood. "But we also have much more!"


Analysis of the sequences and comparison with reference data demonstrated that the complete mitochondrial genome of the rodents had been retrieved from the "DNA pool." This does not make any sense at all, was Greenwood's first thought. However, control experiments led to the same intriguing result. Greenwood asked Tom Gilbert from the Center of GeoGenetics in Copenhagen to help analyse this phenomenon. After considering several hypotheses, they returned to the most obvious explanation: there must have been a chain reaction.


"Figuratively speaking, the targeted sequence took the bait first: the complementary oligonucleotide sequence bound to the bait at the magnetic bead. Then a second sequence attached itself to the tail of the first, and the tail of the second was then 'bitten' by a third one, and so on." Before processing, the sample contained an intact double helix, which was then broken into fragments of various lengths. Because single-stranded DNA spontaneously binds to any suitable complementary strand it encounters, the following happened: After the complementary fragment from strand A bound to the bait, the flanking counterpart from strand B bound to its overhanging end. Then followed another fragment from A again, then from B, then A… and so forth.


It is quite simple and compatible with textbook knowledge. So why did no one observe this before? "If someone is only looking for a thousand base pairs, he usually only checks whether he found them. Everything that occurs in addition is discounted as junk," says Greenwood. The authors call this "by-catch" process, in which a single DNA fragment catches overlapping flanking sequences, "CapFlank." It is therefore possible to yield plenty of genetic information from just a tiny bait fragment. In fact, entire mitochondrial genomes and almost the entire genome sequence of a bacterium were obtained when the team specifically tested the efficiency of the by-catch principle.
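A rough way to picture the CapFlank chain reaction in code: starting from a captured seed fragment, keep pulling in pool fragments that overlap the growing end, so a short bait ultimately recovers a much longer stretch of sequence. The overlap rule and sequences below are simplified assumptions, not the authors' actual analysis pipeline.

```python
# Toy sketch of the "CapFlank" chain reaction: a captured seed fragment
# recruits pool fragments that overlap its end, which in turn recruit their
# own neighbours, extending the recovered sequence far beyond the bait.
MIN_OVERLAP = 4

def extend_right(seed, pool):
    """Greedily walk rightward through overlapping fragments."""
    contig = seed
    used = set()
    grew = True
    while grew:
        grew = False
        for i, frag in enumerate(pool):
            if i in used:
                continue
            # find the longest suffix of the contig that is a prefix of this fragment
            for k in range(min(len(contig), len(frag)) - 1, MIN_OVERLAP - 1, -1):
                if contig.endswith(frag[:k]):
                    contig += frag[k:]
                    used.add(i)
                    grew = True
                    break
    return contig

seed = "ATGCGTAC"
pool = ["GTACCTTG", "CTTGAAAC"]   # each overlaps the previous by 4 bases
print(extend_right(seed, pool))   # -> ATGCGTACCTTGAAAC
```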


CapFlank opens doors to completely new possibilities, e.g. in the genetic analysis of pathogens. "We can use short conserved gene sequences to yield the genome (or at least large sections of it) from pathogenic variants of influenza viruses for example, or from completely new pathogens," explains Greenwood. As their next task, his team wants to retrieve simple and well-characterised DNA viruses such as the elephant herpes virus.


The CapFlank method is even suited to heavily fragmented "ancient DNA" extracted from animal bones in museum collections. These bones are often strongly contaminated with microbial or human DNA. Greenwood's colleagues successfully applied CapFlank to samples from koalas kept in museums. CapFlank is at its most efficient with fresh DNA, though. From the intestinal bacterium Escherichia coli contained in a human urine sample, the scientists retrieved 90 per cent of the genome in one go.




Story Source:


The above story is based on materials provided by Forschungsverbund Berlin e.V. (FVB). Note: Materials may be edited for content and length.



Fermented milk made by Lactococcus lactis H61 improves skin of healthy young women

There has been much interest in the potential for using probiotic bacteria for treating skin diseases and other disorders. Japanese researchers have now found that milk that has been fermented using a probiotic dairy starter can also benefit the skin of young healthy women, reports the Journal of Dairy Science®.



Probiotics have been defined by the Food and Agriculture Organization-World Health Organization as "live microorganisms which, when administered in adequate amounts, confer a health benefit to the host."


"Although many reports have addressed the effect of lactic acid bacteria on skin properties in subjects with skin diseases, such as atopic dermatitis, few studies have involved healthy humans," explains lead investigator Hiromi Kimoto-Nira, PhD, of the National Agriculture and Food Research Organization (NARO) Institute of Livestock and Grassland Science (NILGS), Tsukuba, Japan.


The investigators conducted a randomized double-blind trial to evaluate the effects of fermented milk produced using Lactococcus lactis strain H61 as a starter bacterium (H61-fermented milk) on the general health and various skin properties of young women. Strain H61 has been widely used over the last 50 years in Japan to produce fermented dairy products.


Twenty-three healthy young women, 19 to 21 years of age, received either H61-fermented milk or conventional yogurt for four weeks. Blood samples were taken before and at the end of the four-week period, and skin hydration (inner forearms and cheek) and melanin content, elasticity, and sebum content (cheek only) were measured.


After four weeks, skin hydration was higher in both groups. Sebum content in the cheek rose significantly in the H61-fermented milk group, but not in the conventional yogurt group. Other skin parameters did not differ in either group, although differences exist for season and skin index.


"Season-associated effects are an important factor in skin condition," says Kimoto-Nira. "Skin disorders such as psoriasis and senile xerosis tend to exacerbate in winter. Melanin provides varying degrees of brown coloration at the skin surface, and melanin content is affected by internal and external factors, such as age, race, and sunlight exposure."


Blood count and serum biochemical parameters remained similar and were within normal ranges. The change in oxidative status was the same regardless of yogurt or fermented milk consumption.


"Our study enhances the value of strain H61 as an effective probiotic dairy starter," concludes Kimoto-Nira.




Story Source:


The above story is based on materials provided by Elsevier. Note: Materials may be edited for content and length.





Vaccines for Deadly Diseases Must Work on Both Animals and Humans


This Oct. 13, 2014, photo released via Twitter by the City of Dallas Public Information Managing Director Sana Syed shows Bentley in Dallas, the one-year-old King Charles Spaniel belonging to Nina Pham, the nurse who contracted Ebola. Bentley has been taken from Pham’s Dallas apartment and will be cared for at an undisclosed location. Sana Syed / PIO City of Dallas / AP



Editors’ note: This is the second part of a series on how to tackle infectious disease outbreaks from doctors at the Mayo Clinic.


Bentley, the dog belonging to the first person to have caught Ebola within the United States, is in quarantine at a naval base. Last week, Spain killed a dog that may have been exposed to Ebola. Though this was undertaken in an abundance of caution, it is a stark reminder of the crucial role animals can and do play in the spread of disease.


Scientists know that around 60 percent of human pathogens have an animal origin. But, until recently, they knew very little about which sorts of animals many of these diseases originated from. Now, there’s a strong sense that HIV came from chimps, SARS came from bats, and MERS came from camels. Fruit bats are considered the most likely host of the deadly Ebola virus.


Though wild animals may be the first place a disease appears, they can transfer it to domestic animals that live among us. As Professor Matthew Baylis of the University of Liverpool said: “Domestic animals act like reservoirs for a range of diseases, many of which originally came from wild animals.”



Dr. Craig Rowles stands with hogs in one of his Carroll, Iowa, hog buildings on July 9, 2009. The farmer and longtime veterinarian did all he could to prevent porcine epidemic diarrhea from spreading to his farm, but despite his best efforts the deadly diarrhea attacked in November 2013, killing 13,000 animals in a matter of weeks. PED, a virus never before seen in the U.S., killed millions of pigs in less than a year, and with little known about how it spreads or how to stop it, it’s threatening pork production and pushing up prices by 10 percent or more. Charlie Neibergall / AP File



To make the world safer and more secure from infectious disease, we must be vigilant about outbreaks in animals, too.


The microeconomic impact of the link between animals and human disease is profound. In many countries, a single cow (providing a family with either sustenance or income) means the difference between life and death.


The macroeconomic impact is equally striking: Consumers are paying nearly 13 percent more for pork at the supermarket than they were this time last year—partially because of a deadly pig virus, which has killed millions of piglets over the past 12 months. The disease threatens our food supply, as well as farmers’ livelihoods; indeed, many hog producers are worried about how to keep their farms immune from a disease that has no proven cure.


A bat disease, which has killed an estimated 6 million bats in the eastern United States and Canada since 2007, is another case in point. The diminishing number of bats is a problem, because bats eat mosquitoes. If we have fewer bats, there’s a real possibility that we’ll have more mosquitoes carrying their own set of diseases.



In this Oct. 2008 photo provided by the New York Department of Environmental Conservation is a little brown bat with fungus on its nose in New York. Michigan and Wisconsin wildlife officials said Thursday, April 10, 2014, that tests have confirmed the presence of the fungus that causes white-nose syndrome, which has killed millions of bats in the U.S. and Canada. The disease has now been confirmed in 25 states following the April announcements in Michigan and Wisconsin. Ryan von Linden / AP



As should be clear by now, stopping infectious disease with a rapid and effective response to prevent a true global epidemic should be a major priority on just about every continent. The key takeaway from the connection between animals and disease is that we must have vaccines that target diseases in both animals and humans.


First, we must also develop and implement technology that is adaptable, fast, dynamic, and universally applicable.


Second, we need to continue to gain more understanding about the organisms that cause disease and how they mutate and spread. And this should be done regardless of whether there is commercial value.


Third, we need to have production capabilities—across the vaccine development continuum—that are ready to roll to test proof of concept at a moment’s notice.


Fourth, we must continue to find ways to pull the best and brightest together; it will take a multitude of different scientific disciplines to tackle these diseases of global importance.


To put it simply—and bluntly—we must do everything in our power to keep the dogs of global disease at bay.


We must do this figuratively—and literally.


About the Authors:



Franklyn G. Prendergast, M.D., Ph.D., is a member of IDRI’s Board of Directors. He is also the Edmond and Marion Guggenheim Professor of Biochemistry and Molecular Biology and Professor of Molecular Pharmacology and Experimental Therapeutics at the Mayo Medical School.


Steven G. Reed, Ph.D., is the Founder, President & Chief Scientific Officer at IDRI. His research interests have focused on the immunology of intracellular infections and the development of vaccines and diagnostics for infectious diseases. He led the team that, together with GSK, developed the first defined tuberculosis vaccine to advance to clinical trials, a more recent second-generation TB vaccine candidate, and the first defined vaccines for leishmaniasis, as well as the K39-based diagnostic tests currently licensed for leishmaniasis.


Darrick Carter, Ph.D., is the Vice President of Adjuvant Technology at IDRI. His work centers on new immunomodulatory agents and formulations, as well as the process development necessary to take vaccines and therapeutic candidates from the lab to the clinic.



A Trippy Mind-Reading Goo That Reacts to Your Emotions



Solaris turns the abstract thought of its viewer into a visual representation of their brainwaves. Photo: Dmitry Morozov



Russian artist Dmitry Morozov turns neural activity into art. He’s used brainwaves to control robotic musical instruments and harnessed psychic powers to stage performance art. His latest creation, called Solaris, works like a mood ring, moves like a lava lamp, and looks like The Matrix while making its observer feel like a Delphic Oracle.


Morozov outfits observers with a $499 electroencephalography headset and places them in front of a curvy, chrome tank filled with a glowing, UV-sensitive liquid. He instructs viewers to communicate with the inert object, a seemingly bizarre request, but the headset picks up the resulting brainwaves and activates a powerful magnet hidden under the placid pool’s surface. Magnetic pulses linked to the viewer’s brainwaves then stimulate an inky ferrofluid. The splotch of black ooze reacts in turn, roiling in response to stressful thoughts and smoothing as the observer calms down.
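Morozov hasn’t published his code, but the control loop the piece implies might look roughly like the hypothetical sketch below: read a stress estimate from the headset, smooth it, and use it to drive the electromagnet under the pool. The two hardware functions are stand-in placeholders, not anything from the actual installation.

```python
# Hypothetical control loop for an EEG-driven ferrofluid display.
# read_eeg_stress() and set_magnet_power() are placeholder stand-ins for
# headset and magnet-driver APIs; nothing here is Morozov's actual code.
import time

def read_eeg_stress():
    """Placeholder: return a 0.0-1.0 stress/attention estimate from the headset."""
    return 0.5

def set_magnet_power(level):
    """Placeholder: drive the electromagnet at a 0.0-1.0 duty cycle."""
    print(f"magnet duty cycle: {level:.2f}")

SMOOTHING = 0.8  # damp sudden jumps so the fluid moves organically
level = 0.0

for _ in range(100):          # short demo loop
    stress = read_eeg_stress()
    level = SMOOTHING * level + (1 - SMOOTHING) * stress
    set_magnet_power(level)   # more stress -> stronger pulses -> rougher fluid
    time.sleep(0.1)
```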


Over time, some viewers begin to form a dialog with the murky blemish, controlling its behavior with neural activity. Experts can will the beating blot around the surface, telepathically force it to submerge and reemerge, and otherwise engage it through silent meditation.




“Since I was programming all the algorithms and spent hours and hours with it I got really deep visual to brain to EEG feedback,” says Morozov. “I could move and change the image any way I wanted just by changing my cognitive activity, mood, and concentration.”


Sci-fi Inspiration


The project is called Solaris, after a sci-fi novel of the same name centered on a crew of astronauts orbiting a planet-sized organism that lives under a globe-circling ocean. The astronauts attempt to probe the planet, but their crude techniques antagonize the unknowable creature, which retaliates by subjecting the crew to horrific psychic experiences, like reliving the suicide of a long-lost lover.


The story is a deeply philosophical exploration of the limits of human communication. Morozov, along with collaborators Julia Borovaya and Eduard Rakhmanov, wanted to explore this intellectual abstraction in a more intimate, interactive, and benign experience. “We decided to combine this magical idea of liquid ‘screen-ocean’ and machine that can read ‘minds.’” The result is a “mirror” that turns electrical pulses from the viewer’s frontal lobe into a visual fingerprint of sorts.


Each viewer reacts differently, but most are eventually able to form a bond with the blotch. Morozov has noted that various moods and personality types tend to manifest similarly across subjects. “Different emotions and activities have similar patterns in similar states; of course, it’s a bit different for each participant.”