Google Just Fixed One of Wi-Fi’s Biggest Annoyances


Stop me if this sounds familiar (if it doesn’t, you’re not paying close enough attention). You walk into a coffee shop, where your phone hungrily gloms onto the open Wi-Fi network—probably called something like Netgear, or AT&TWiFi—and promptly stops working. The shop forgot to update the router’s firmware, or just bought budget hardware in 1997 thinking, “internet’s internet!” and never upgraded again. Either way, your phone just invisibly sabotaged its own connectivity. You’re standing six feet from where you had flawless internet, and suddenly the information superhighway becomes one giant roadblock. It’s a small but consistent (and infuriating) problem, and Google has finally taken steps to solve it. With the new Android 5.1 update, which began rolling out yesterday in the U.S., your phone will remember which of the networks you’ve tried to connect to have crappy Wi-Fi, and save you from ever hopping on their bandwidth again.


A few years ago, your phone’s Wi-Fi-hopping strategy made sense. Your 3G network was probably painfully slow, perpetually overloaded, and generally battery-crushing. You also probably had an adorably small data cap, measured in megabytes. I spent years bouncing from store to store, lingering outside of whatever restaurant had an open network just long enough to download some new music. AT&T even made an app for Android phones that would tell you where you could find Wi-Fi—and it was super popular! The networks offered by cable companies, fast-food joints, and one candy store whose name I can’t recall, were a huge asset.


Today, though, LTE speeds are fast and efficient enough that you’re rarely better off on the sponsored connection at the train station. There’s almost never a good reason to keep stalling out on the same dud Wi-Fi networks. Android 5.1 spares you that agony, while still defaulting to reliable Wi-Fi networks when they’re available. Magic.
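Conceptually, the feature boils down to scoring networks by past performance and skipping the known duds. Here’s a minimal sketch of that idea in Python—purely illustrative, not Android’s actual implementation; the class name, scoring scheme, and threshold are all invented:

```python
# Illustrative sketch of "avoid known-bad Wi-Fi" logic.
# Not Android's implementation; names and thresholds are invented.

class WifiPicker:
    def __init__(self, bad_threshold=-2):
        self.scores = {}              # SSID -> running connectivity score
        self.bad_threshold = bad_threshold

    def record_result(self, ssid, internet_ok):
        """After connecting, note whether the network actually had internet."""
        delta = 1 if internet_ok else -1
        self.scores[ssid] = self.scores.get(ssid, 0) + delta

    def choose(self, visible_ssids):
        """Prefer remembered-good networks; skip remembered-bad ones.
        Returning None means: stay on cellular."""
        usable = [s for s in visible_ssids
                  if self.scores.get(s, 0) > self.bad_threshold]
        if not usable:
            return None
        return max(usable, key=lambda s: self.scores.get(s, 0))

picker = WifiPicker()
for _ in range(3):
    picker.record_result("CoffeeShopFreeWiFi", internet_ok=False)
picker.record_result("HomeNetwork", internet_ok=True)

print(picker.choose(["CoffeeShopFreeWiFi", "HomeNetwork"]))  # HomeNetwork
print(picker.choose(["CoffeeShopFreeWiFi"]))                 # None: stay on LTE
```

The key move is the second call: once the coffee shop’s network has failed enough connectivity checks, the phone simply stops considering it, even when it’s the only Wi-Fi in sight.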


Most of the Android 5.1 update fixes small annoyances like this. You can connect to Bluetooth devices without plowing through three settings menus; you can access the Quick Settings from the lock screen. You can customize those settings, so you have faster access to only the things you use most (hello, Hotspot Mode; goodbye, pointless auto-rotate toggle). There’s extra security, so that even if someone steals and resets your phone, they won’t be able to use it. You can make better-sounding phone calls, if you’re on one of the vanishingly small set of devices that supports HD Voice.


It’s all great; the only catch is that you probably won’t get it anytime soon. While the Nexus 6 has already started seeing the update stateside, Google’s latest and greatest software still takes too long to percolate around the ecosystem for everyone else. But at least when you look down and suddenly realize you’re connected to a painfully sluggish Starbucks network, you’ll know the days of your anguish are numbered.



New Samsung Tech Will Give Cheap Phones Tons of Storage

Cheap phones like the Moto E have tradeoffs compared to pricey ones. With Samsung's latest drive, storage capacity won't be one of them. Motorola



They may not be as sassy or chamfered or as pixel-packed as the iPhones and Samsung Galaxys of the world, but there are some excellent, affordable smartphones out there these days.


Take the Moto G, which packs a quad-core chip, an ample 5-inch display, a microSD slot, and customizable shells for less than $200—without a contract. Or the step-down Moto E, which squeezes a decent phone into a $120 package—again, no contract required. The same goes for the $80 Lumia 635, and a whole fleet of capable sub-$300 off-contract handsets. If you don’t care about the handheld horserace and just want a solid, budget phone that works, options abound.


But there are certainly reasons you pay less for these discount devices. They may not run the latest version of Android. They don’t have the greatest cameras or screen resolutions. And they generally don’t offer a lot of onboard storage, which can be more important than you may think. The $180 Moto G, for example, has a scant 8GB onboard. The cheaper Moto E skimps even more, with just 4GB. And although you can pop a microSD card into both of them, the camera and video specs are capped (in part) to stretch their limited storage a bit further.


The reason for this, as you’ve likely assumed, is that storage is expensive, but relief may be on the way. Samsung just announced a new embedded flash storage drive that aims to bring high-end storage capabilities to lower-end phones. The company claims its 128GB NAND-based eMMC 5.0 integrated storage will help future generations of cheap phones and tablets meet our ever-growing storage needs without driving up the prices into luxury territory.


So what keeps this new high-capacity storage drive cheap? Surprisingly enough, new developments on the high end. Samsung is also trotting out a new Universal Flash Storage (UFS) 2.0 drive, intended for high-end phones, tablets, and possibly even laptops, which the company says will kick mobile read/write speeds into overdrive. According to Samsung, the new UFS 2.0 drives offer significant advantages over eMMC, including the ability to read and write data at the same time and better multitasking performance. Meanwhile, Samsung is also producing mobile flash drives based on the new eMMC 5.1 standard approved earlier this year, which means the company will wind up with three tiers of mobile storage: a high-performance UFS 2.0 drive, a mid-range eMMC 5.1 drive, and a low-end eMMC 5.0 drive. That leaves eMMC 5.0 a generation behind in the eMMC game, and even further behind the higher-end mobile market’s UFS 2.0 future. It also means it will be considerably cheaper.


Cheap or no, more storage is certainly better than less storage, and odds are you won’t notice the performance difference between these inexpensive high-capacity drives and what’s currently on the market. The eMMC 5.0 drive may be slower than eMMC 5.1 and UFS 2.0, but it’s faster than a microSD card for sure. Plus, the kind of cheap phones this new storage is intended for likely won’t be able to do things like shoot high-bitrate, higher-resolution (4K) video anytime soon, so you may not need the lightning-fast performance of UFS 2.0 anyway.


The big picture is that the new wave of cheap phones will provide even more bang for the buck. While the next generation of cheap-but-good handsets may not have the tack-sharp screens, high-end cameras, and other accoutrements of the fancy phone set, at least they’ll have plenty of space to stash all your digital stuff.



Game|Life Podcast: Valve Goes VR, Nintendo Goes Mobile








The Inspiring Sled Dogs and Stunning Views at Iditarod


Katie Orlinsky is willing to suffer to make her photos. For the past month she’s been covering sled dog races in Canada and Alaska in temperatures as low as negative 50 degrees.


“It can be totally nuts,” she says.





Most recently, Orlinsky was in Alaska for the 1,000-mile Iditarod race on assignment for The New York Times. A musher named Dallas Seavey won the race Wednesday morning, and his dad Mitch came in second. (Either Dallas or his dad has won the race in each of the past four years.) Approximately half the field is still out on the course, and the race is famously not over until the last team crosses the finish line.





Before the Iditarod, Orlinsky covered the Yukon Quest, another 1,000-mile race, from Whitehorse, Yukon, to Fairbanks, Alaska, for National Geographic News. During both races she posted outtakes to her popular Instagram account to give people a sense of the day-to-day experience of covering sled dog racing. (Though the races are over, Orlinsky will post more photos in the coming days as she sorts through her takes.)





Orlinsky, 31, says she’s willing to brave the extreme cold because races like the Iditarod take her to some of the most remote but stunning places on the planet.




The Yukon River, most of the #Iditarod race so far has been along this beautiful yet imposing body of water.


A photo posted by Katie Orlinsky (@katieorlinsky) on Mar 12, 2015 at 9:14pm PDT




She’s also been able to meet and build relationships with local people in these far-flung northern communities.





She’s also fascinated with the intense relationship between the dogs and mushers. “These dogs won’t just go 1,000 miles for anyone,” she says. “They have to really love the person they are running for.”





To keep her camera gear functioning in such extreme temperatures, Orlinsky has come up with a couple hacks. She has a gigantic batch of batteries that she rotates in between her two Canon 5D Mark IIIs. She stores all the batteries in her parka pocket and puts hand warmers in the pocket for an extra dose of heat (the temperatures are so cold that her iPhone won’t work at all if she pulls it out).


When she goes inside a structure with heat, she has to be careful because the cameras can warm up too fast and moisture will build up inside the electronics. To avoid this, she’ll sometimes keep her cameras in a plastic bag, and she’s extra careful about warming a camera up slowly.





Sometimes it’s just too much. When temperatures drop to extremes like negative 50, there’s no way to know whether her cameras will function.


“Negative 20 and above is okay, but once you get below negative 20, all bets are off,” she says.




Snow blanket on sled dog #Iditarod #Unalakleet #sleddogs


A photo posted by Katie Orlinsky (@katieorlinsky) on Mar 19, 2015 at 6:02pm PDT




Eventually, Orlinsky hopes to turn all her sled dog work into a book. She’s only been shooting the events for two years, but last summer, between winter races, she visited several racers at their homes in Alaska to see how they breed and train dogs in the off-season. There’s still one more race in Alaska that she hopes to cover next month (so look out for more Instagrams). And this summer she’s planning to go back to Alaska to keep documenting the racers. If all goes according to plan, she’ll also be back in Alaska next winter during the Iditarod, trying to stay warm while adding to her project.


“I’m still just scratching the surface,” she says.




And goodnight :) #Alaska


A photo posted by Katie Orlinsky (@katieorlinsky) on Mar 9, 2015 at 1:18am PDT





Things to Do in Miami That Aren’t Mountains of Drugs






Wander through the Wynwood Walls, where new street artists cover buildings with brightly colored graffiti. Alex Webb





WIRED’s First Investor Speaks at the Very First TED


Imagine a version of Skype or Google Hangouts where, instead of a flat screen, you chatted with a 3-D screen molded into the shape of your friend’s face. When your friend moved their head, the screen would too.


That was one of the first “ideas worth spreading” at the very first TED event in Monterey, California, in 1984. It was pitched by Nicholas Negroponte, who would go on to found the MIT Media Lab and become WIRED’s first investor and an early contributor to the magazine.


WIRED has been at TED all week, grappling with the annual event’s legacy. On one hand, it liberates ideas from stuffy academic journals and spreads them to the YouTube viewing masses. On the other, it can be a self-congratulation fest for people with more money than sense.




The event is a paradox, both informative and infuriating, energizing and exhausting. And that paradox was there right from the beginning, as you can see from the video above. This was before TED’s famous 18-minute format, but it contains all the hallmarks of a modern TED talk, complete with high-tech demos and an inspiring story.


Today Negroponte is probably best known for founding the One Laptop Per Child project and railing against network neutrality, but in the early 1980s he was the director of MIT’s Architecture Machine Group, where he led researchers experimenting with interactive television and new computer interfaces.


Besides his face-mold video chat idea, Negroponte predicted that touchscreens would become an important interface for computers, and demoed some early prototypes of touchscreen devices. He also predicted that not only would we read books on screens, but that books would become more interactive, giving readers the option of drilling into particular areas and skimming over others.


PCs Weren’t Inevitable


Keep in mind that in 1984, the Macintosh had only just been released and, though the Commodore 64 was selling well, it wasn’t at all clear that computers would become an everyday part of most people’s lives, let alone the internet and mobile computing.


Then Negroponte tells the story of a kid in Senegal who taught himself how to program even though no one thought he could read. It turned out the kid didn’t think of reading computer manuals as “reading,” because it was practical; reading, to him, meant the seemingly pointless literature that teachers handed out. Negroponte suggested that giving kids computers and letting them see the immediate results of programming might be a better way to engage them with education—a radical concept that’s still controversial today. This anecdote is pure TED: inspiring but naive, arrogantly technocratic yet insightful.


The first TED was anything but a success, and the next event wasn’t held until 1990. It took many more years for TED to become a cultural force, spawning a publishing wing and numerous spin-off events, as well as inspiring more populist alternatives like BIL, not to mention savage parodies. But you can see everything that’s wrong and right with TED in this early talk.



The Gender Problem in Venture Capital Is Really, Really Bad

Ellen Pao. Ellen Pao. Lauren Simkin Berke



The gender problem in venture capital is really, really bad. Industry-wide, 77 to 79 percent of VC firms have never had a woman represent them on the board of one of their portfolio companies. And more than three-quarters don’t have any women working as venture capitalists at all.


And check this out: Those numbers come from an expert witness called by a venture capital firm to defend it against charges of gender discrimination.


Harvard Business School Professor Paul Gompers was called to the stand yesterday by Kleiner Perkins Caufield & Byers, the storied venture capital firm defending itself against a gender discrimination suit brought by a former partner, Ellen Pao. In calling Gompers, Kleiner was seeking to cast itself in a positive light compared to the dismal diversity record of the industry as a whole.


Gompers said he started with a data set from 2003 and updated his research in 2013, looking at 30,000 venture capitalists in the US. The numbers, he said, were almost the same in 2003 and 2013. Among his findings, he testified that Kleiner Perkins had ten women on portfolio companies’ boards in 2013, more than any other VC firm.


Gompers said he found that women fared better in firms that are larger, older, and have other women working as venture capitalists. He said that Kleiner Perkins “had all of those boxes checked.”


Later, under questioning from Alan Exelrod, Ellen Pao’s attorney, about the statistical significance of this finding, he clarified: “My study says, on average, women perform as well as the men in these types of firms.”


In 2014, Gompers wrote a report analyzing gender effects in venture capital. In that study, he wrote that, if a woman did not have significant prior experience as a tech exec, “better entrepreneurs may be less likely to take money from a female venture capitalist.”


Kleiner attorney Lynne Hermle also used Gompers’ time on the stand to present a chart comparing Ellen Pao’s base salary from 2004 to 2011 to the base salaries of three male junior partners at the firm. The chart showed Pao had a higher base salary in all the years, though by 2011, when she was making about $400,000, the gap with the others had closed significantly.


$900 Per Hour


Pao wasn’t the only one whose pay was put up for scrutiny. Gompers testified that his standard rate is $900 per hour. Under questioning from Exelrod, he said his bill to Kleiner ran to more than $90,000.


“I am not being paid to reach one conclusion or another,” Gompers said.


Gompers said he had a professional interest in determining what factors allowed successful female VCs to thrive. Having women at a VC firm would be beneficial to that firm, he testified. “Venture capitalism is all about collaboration with partners, and you hope there’s a diverse set of experiences at a firm so that people can question new ventures,” he said.


Gompers testified under questioning from Exelrod that he had authored twenty-seven articles in refereed journals, spoken at high-profile conferences, and written a book about VCs. He also said he sits on the board of the US Army’s venture capital fund. “It’s fair to say you are a promoter and a cheerleader for the venture capital industry, isn’t it?” Exelrod challenged him.


“Not a cheerleader,” Gompers answered. “A scholar.”



An Afghan Museum That Mimics Ancient Buddhist Monasteries


Regardless of style or purpose, we tend to accept that buildings are just that: built, on top of the earth’s surface, to be bigger and taller than we are. For centuries, the people of the Bamiyan Valley of Central Afghanistan have flipped that script. The valley sits between the mountains of the Hindu Kush, and since the 1st century, the surrounding cliffs and tributaries have been home to ancient monasteries and chapels, built out of caves and foothills. Twenty centuries later, that’s not set to change: the Afghan government and UNESCO recently announced that the winning design in a competition to build a local cultural center is a series of brick-lined passageways and rooms built directly into the land.


The winning team, a small firm from Argentina called M2R, had a lot to grapple with. In the 6th century, the people of the Bamiyan Valley carved two Buddha sculptures into the cliffs to mark the westernmost point of Buddhism’s expansion along the ancient Silk Road. They were massive—one stood almost 200 feet tall. Buddhists would meditate near them in the sandstone caves; monks visited from China to pray. The statues were integral not just to Buddhism but to Bamiyan culture, which in more recent years had become predominantly Muslim. Locals even had a homespun fable about the statues: that they were ill-fated, star-crossed lovers from different religions, and that’s why they turned to stone.


Tea House M2R

Then, nearly 1,500 years later, in 2001, the Taliban destroyed the idols with dynamite in an attack against pre-Islamic idolatry. In 2003, after the Taliban had fallen from power in Afghanistan, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) declared the site a historic landmark, an action that immediately made locals, worshippers, and archaeologists ask: Will the Buddhas be rebuilt? The debate lasted for years. German archaeologists were keen to rebuild the statues, but UNESCO operates under the Venice Charter, which says that any monumental reconstruction has to be done with the original materials.


Designing Absence, Not a Monument


UNESCO eventually ruled against rebuilding. Instead, it staged an architectural competition for a new cultural center in the Bamiyan Valley that would memorialize the destruction of the Buddhas, support archaeological work through storage, and allow for events. M2R’s winning proposal is called Descriptive Memory: The Eternal Presence of Absence. Judging from the renderings, it’ll be a peaceful piazza, like a contemporary version of the sanctuaries built into the foothills centuries ago. Excavating the landscape is partly strategic, says project lead Nahuel Recabarren, because the soil there can store large amounts of heat, which helps insulate the caves from the cold. It’s also deeply symbolic, because it mimics the work of the Buddhist monks who, ages and ages ago, carved caves into the landscape for their sanctuaries.


M2R faced a design challenge similar to one put to the architects of the National September 11 Memorial Museum in New York City: when you’re honoring a tragedy so recent that the people visiting your site are the same people who bore witness to it, what’s appropriate? In this case, M2R has taken a counterintuitive approach to construction by harnessing the negative space left by the Buddhas. “We had to find an adequate way in which architecture could respond to the meaning and history of the place,” Recabarren says. “We thought that, given the breathtaking landscape and the deep cultural significance of the area, the Cultural Center should not impose itself over the site. Much of recent architecture has become obsessed with image and visibility, but not every building can be a monument.”



This Beer Was Brewed With Yeast That’s Been to Space


In the hallowed halls of a crowded bottle shop, it can almost seem as though there are more craft beers on the market than there are stars in the sky. Still, you may want to make an effort to seek out Ground Control, a brew that’s truly—no, literally—stellar.


Ninkasi Brewing Company, based in Eugene, OR, founded the Ninkasi Space Program (NSP) in 2014 in an effort to push the (outer) limits of brewing. The goal? To send yeast to space, recover it, and use it to brew some delicious beer.


This April 13th, Ninkasi will release the fruits of NSP’s labor. Ground Control is an Imperial Stout, brewed with Oregon hazelnuts, star anise, and cocoa nibs, plus Apollo, Bravo, and Comet hops. And, yes, space yeast.


Like any mission to space, the journey to the final frontier was not without its hiccups. Ninkasi’s first attempt, Mission One, launched from the Black Rock Desert in Nevada in July 2014, but after the payload hit the ground nine miles from its intended landing site, the yeast sat undiscovered for 27 days.


Notoriously fickle, yeast can only survive within a fairly narrow temperature range. As you might imagine, 27 days in the Nevada desert fell outside those bounds. Mission One’s sacrifice was not in vain, though. NSP regrouped and re-launched in October 2014, this time with the help of UP Aerospace, a Denver-based private spaceflight company, from Truth or Consequences, New Mexico. The rocket, SpaceLoft-9, soared to 408,035 feet (77.3 miles) above Earth with six vials of yeast onboard.


Four minutes of weightlessness in the exosphere later, the yeast coasted back to the ground, where it was recovered by the NSP team and transported back to Oregon, tested, and deemed viable for brewing.


And so, Ground Control was born.


A few minutes in space doesn’t technically change the yeast’s properties. Ground Control is not supercharged by solar radiation, and it won’t turn you into a Romulan*. But there are plenty of brewers who struggle to keep yeast alive here on the ground, so launching six vials into space, bringing them back unharmed and brewing a commercially available beer is quite the feat. And space beer, if nothing else, is quite the conversation starter.


Ground Control will be available in limited-edition 22-ounce bottles for just under $20 in Alaska, Alberta, Arizona, California, Colorado, Idaho, Nevada, Oregon, Washington, and Vancouver, B.C., with limited availability in New York, Washington, D.C., and at select events. Get it while you can, because it’s a limited, one-time release—this batch is just 55 barrels, and when it’s gone, it’s gone. At least until the next time Ninkasi decides to send a rocketful of yeast into space.


So cheers, fellow terrans, and may you taste the stars when you sample the world’s first space-aged brew.


*Probably.



If Google Glass Was a Joke, Then the Joke Is On You


I don’t feel the following needs any elaboration, but for the sake of making my point I will do so anyway. When a device or piece of software is in its beta phase, it is near completion; typically, beta is the last version before the final release. The main reason a beta is released is that the efforts businesses make to create the next big thing are naturally based upon a hypothesis. The best way to find out whether the hypothesis is anywhere near reality is to simply test it on real people in real-life situations, a point that was eloquently made by Josh Bradshaw of WorkTechWork in his piece “The Wearables’ Dirty Laundry.”


Personally, I feel that pre-releasing a product or software goes against our natural urge, simply because it takes a lot of courage to do so. Whatever you release puts your reputation on the line. If your hypothesis is completely off, what does that tell you (or, even worse, others) about your level of insight into the market as a whole? Going against that natural urge to protect what you have, with the goal of doing better, should, in my opinion, be celebrated and embraced.


The above is easily projected onto Google Glass, released in February 2013. The fact that Google did the price-psychology shuffle on early adopters by making them pay $1,500 for the device can only be judged a great marketing strategy. It gave the product the exclusivity it probably needed to get the attention it sought.


While the tech-savvy among us are presented with more and more choices between devices, it is almost as if we are forgetting that wearable technology is still in its incubation period. Design limitations are a real problem, even for the brightest people working at the most innovative companies. Some could argue that Google, Apple, or any other tech company has its back against the wall on this issue.


Maybe it makes more sense to take a look at our level of expectation, so we can realize that we should applaud companies like Google that stick their necks out knowing they will probably be slaughtered for it. I am sure Ivy Ross and Tony Fadell, who are in charge of Google’s smart-eyewear division, will be extremely happy with all the valuable data they have gathered during the first Google Glass project. The self-reflection expressed by Google X boss, or rather Captain of Moonshots, Astro Teller at SXSW, where he stated that “We allowed and sometimes even encouraged too much attention for the program,” isn’t an admission of failure in any way, especially when it’s put in the light of the amount of behavioral and user data gathered.


Luckily for Glass enthusiasts, there are reports in the tech media that although the initial project was killed on January 15th, a Google Glass 2 might be on the horizon. Apparently Google has been showing the Glass to “some” of its more important partners; whether this was before or after it pulled the plug is hard to say. Taken together with Google’s press release, which spoke of new versions of the smart glasses, I think it might take some time before Google Glass 2 is released, because I don’t see Google using the same strategy again.


Anyhow, I am sure even Apple has learned a thing or two from Google’s trial release, and the fact that more and more news is being published about Apple assigning a team to research the augmented reality space speaks clearly. In my opinion, Google’s initial strategy was to innovate in the smartest way possible, and it is unfortunate that everyone seems to be waiting in line to crush the company.


Maybe Google should be complimented on its marketing strategy. But if you don’t understand that the best way forward for Google was to build the best possible smart glasses while being unafraid of failure, then you probably don’t want to. To me, this feels like someone making a witty joke that is too clever for the audience to understand. If that’s the case, then maybe the joke is on you.


Mano ten Napel is the founder of the wearable startup Novealthy.



Ingenious Gardening Tools From Fiskars, the King of Cutting


Sometime in early 2013, a few of the engineers at Fiskars—maker of the world’s most ubiquitous orange scissors in addition to gardening and pruning gear—noticed a discrepancy. “We always do our testing [for heavy hand tools] based on a machine cutting a branch,” says Dan Cunningham, a senior engineer at the company. Yet, “we humans are not machines. So we thought, well, how far away are we?”


The industrial machines at Fiskars test loppers, shears, and pruners by applying a fixed, steady amount of pressure on saplings until they snap. The human arm maneuvers differently than a couple of pieces of metal. We have joints, bones, and varying levels of strength. Cunningham and the engineers at Fiskars wanted to figure out how to design their tools for the human arm, which is how they ended up launching the PowerGear2 line of tools.


Old Tools, New Data


The answer to Cunningham’s question—how different is the human arm from a machine’s?—lies in data, all of which is pretty new. Tools like loppers are practically primitive (“In the 1700s you can find pictures of people using them,” Cunningham says, and earlier versions surely existed in some form), but it wasn’t until recently that the makers of such tools gained nuanced insight into what happens when people use them.


There’s a team at Georgia Tech that studies this specific brand of ergonomics. The Arthritis Foundation has a stamp of approval it issues to products that meet standards based on research conducted by that Georgia Tech team, so that’s where Fiskars took its human-versus-machine problem. All of Fiskars’ products have earned the approval stamp in the past, but Cunningham says with this line they “wanted to go even further, and really tweak it.”


The team at Georgia Tech started taking measurements by asking over 100 test subjects to squeeze shearing-tool handles outfitted with sensors that measure force. Subjects squeeze for five seconds, stop, then squeeze again with their arms held in new positions. With all that, the researchers compiled what Cunningham calls “some complicated math and models of people over 50, under 50, male, female. It’s given us a model of how strong a person is as they’re moving.” In essence, the Fiskars team was doing what user-centric designers have been doing for years. This time, they had the benefit of sensor-powered, ultra-precise measurements.


Armed with those models, the Fiskars team reengineered the gear at the crux of all the heavy hand tools. “Typically a gear is two circles that spin around,” Cunningham says. “With circular gears, that’s saying the mechanical advantage is constant.” But what the Georgia Tech models showed—and what the Fiskars engineers knew through practice—is that when a person cuts a branch, it’s easy at first, then much more difficult, and then easier again right before it snaps.


To deliver a mechanical boost in the middle, the engineers created a non-circular gear that adjusts the ratio of force applied during a snip. Cunningham compares it to the change that clicks in when you change bike gears, only it happens automatically here. The subtle design decision “allows us to tune the power that the lopper is providing,” Cunningham says. “That tuned curve feels very smooth all the way through.”
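The principle is easy to sketch numerically: hand force equals cutting resistance divided by mechanical advantage, so a gear ratio that peaks where the resistance peaks flattens the force the user feels. The toy model below illustrates that idea—the resistance curve and ratio values are invented for the example, not Fiskars’ actual engineering data:

```python
# Sketch: why a variable (non-circular) gear ratio helps.
# The resistance profile and gear ratios are invented for illustration.

def branch_resistance(progress):
    """Cutting resistance in newtons: easy at first, hardest in the
    middle of the cut, easier again just before the branch snaps."""
    return 100 + 300 * (1 - abs(2 * progress - 1)) ** 2

def hand_force(progress, ratio):
    """Force the user must apply = resistance / mechanical advantage."""
    return branch_resistance(progress) / ratio

steps = [i / 100 for i in range(101)]  # cut progress from 0.0 to 1.0

# Circular gear: constant mechanical advantage throughout the cut.
constant = max(hand_force(p, 4.0) for p in steps)

# Non-circular gear: the ratio rises toward the middle of the cut,
# exactly where the resistance peaks.
variable = max(hand_force(p, 4.0 + 4.0 * (1 - abs(2 * p - 1)) ** 2)
               for p in steps)

print(f"peak hand force, circular gear:     {constant:.0f} N")  # 100 N
print(f"peak hand force, non-circular gear: {variable:.0f} N")  # 50 N
```

In this toy model the variable ratio halves the peak effort without changing the total work, which is the “tuned curve” feel Cunningham describes: the tool quietly shifts into a lower gear right when the cut gets hard.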



What We Know About Nintendo’s New NX Gaming Platform


Nintendo’s got a new game platform brewing. Here’s what we know so far.


Everyone’s abuzz with the news that Nintendo will make smartphone games, but in the midst of that announcement this week it dropped another bombshell: It’s close to announcing a new dedicated gaming platform, codenamed NX, which it will unveil in 2016.


NX, said Nintendo president Satoru Iwata when he announced it, should be taken as “proof that Nintendo maintains strong enthusiasm for the dedicated game system business.”


“I wanted to communicate that Nintendo will be progressing with videogame-dedicated devices with a passion,” Iwata said during a Q&A session following the presentation. “We wanted to make it clear that Nintendo will continue with that as our core business.”


Since then, Nintendo has refused to say even a little bit more about NX, other than that it is built around “a brand-new concept.” From the company that brought us the Virtual Boy, this tells us absolutely nothing. So of course, in the absence of information, there’s been tons of speculation as to what it might be.


We actually do have more information, though, that can guide us in the right direction.


Nintendo’s handheld and home console development teams had always been separate, and created totally different types of game machine. The fact that the home and portable machines were such different pieces of technology had begun to prove very frustrating to Nintendo. Iwata said in a presentation to investors in March 2014 that while Nintendo wanted to port Wii games to the Nintendo 3DS, or bring Nintendo 3DS titles to the Wii U, it found that because the architectures were so different, it required “a huge amount of effort.”


But in 2013, Iwata unified the hardware divisions into a single team, saying in that 2014 presentation that “because of vast technological advances, it became possible to achieve a fair degree of architectural integration.” This meant, he said, that porting games across platforms would be much easier, and help solve Nintendo’s current problem of “game shortages.”


Then, he began to talk about Nintendo’s “next system.”


“It will become important for us to accurately take advantage of what we have done with the Wii U architecture,” Iwata said. “It of course does not mean that we are going to use exactly the same architecture as Wii U, but we are going to create a system that can absorb the Wii U architecture adequately. When this happens, home consoles and handheld devices will no longer be completely different, and they will become like brothers in a family of systems.”


Let’s unpack what Iwata was saying here. He’s specifically talking about the merging of home and portable systems, and creating something that can “absorb the Wii U architecture.” So he is specifically talking about a handheld device that shares a common development platform with Wii U. This doesn’t mean that you would put a Wii U disc into your handheld, or even that the downloadable versions of Wii U games would Just Work on it, but that it would be simple enough to port Wii U software to this handheld.


Note that he isn’t saying that the Nintendo 3DS architecture would also need to be absorbed into this new portable. He’s talking about moving forward with Wii U specifically, not 3DS. So this sounds very much like Nintendo would be moving away from 3DS and creating a handheld device that, if anything, would use content from Wii U, not 3DS.


In case anyone in the audience was thinking that what Iwata was discussing was the merging of home consoles and handhelds into one single successor device, he was quick to put the kibosh on that. “I am not sure if the form factor (the size and configuration of the hardware) will be integrated,” he said. “In contrast, the number of form factors might increase.”


“Currently, we can only provide two form factors because if we had three or four different architectures, we would face serious shortages of software on every platform,” he said. But if Nintendo had one unified platform like Apple’s iOS, Iwata said, it could actually create more than just two different game machines each cycle. “To cite a specific case, Apple is able to release smart devices with various form factors one after another because there is one way of programming adopted by all platforms.”


“Another example is Android. Though there are various models, Android does not face software shortages because there is one common way of programming on the Android platform that works with various models. The point is, Nintendo platforms should be like those two examples.”


In conclusion, he did not rule out the possibility of a future in which Nintendo does only make a single form factor that’s used for multiple purposes: “Whether we will ultimately need just one device will be determined by what consumers demand in the future, and that is not something we know at the moment.”


So, with all of this in mind, what can we say about NX?


It can be used as a portable game device—at least. This is clear as day. Whether or not you can also hook it up to a TV, it must be a handheld. That said, it’s possible that NX could also output to your television. I saw gaming video producer Ryan O’Donnell encapsulate this perfectly following the announcement, noting that Wii U was “backwards”: The guts of the machine should be in a handheld device, and the dumb-terminal part should hook up to the TV.


Wii U is part of Nintendo’s future, but 3DS may not be. 3DS is the dog that didn’t bark in this scenario. I pointed this out when I discussed Kirby and the Rainbow Curse for Wii U, but the Nintendo 3DS is actually not a very good touchscreen gaming device. Having a 3-D top screen and a 2-D touch screen doesn’t actually work, since it relegates the touchscreen to only secondary use scenarios, like extra virtual buttons in Legend of Zelda.


Nintendo may need to rip off the Band-Aid at some point and develop a portable machine that is not backward compatible with 3DS. NX could be it, and its backward compatibility (after a fashion) may only extend to Wii U. I don’t know where this leaves the future of glasses-free 3-D displays. Maybe NX has one screen that can swap between touch-sensitive and 3-D display modes. Maybe 2-D is the future.


NX is probably a suite of devices, not a single one. What we’ll probably see in 2016 when Nintendo takes the wraps off NX is the tip of the spear. Nintendo will need something that’s primarily portable to take the baton from 3DS. But even if the device we see doesn’t also work in the home, Nintendo will likely unveil other form factors that play similar (or identical) games but have different home-portable configurations. Maybe the hybrid device is real, but it’s not the first thing in stores. Maybe there’s a device that only works with the TV, but is significantly cheaper since it has no screen.


As for the rest, all we can do is speculate until 2016.



Google Builds a New Tablet for the Fight Against Ebola

Credit: Médecins Sans Frontières.



Jay Achar was treating Ebola patients at a makeshift hospital in Sierra Leone, and he needed more time.


This was in September, near the height of the West African Ebola epidemic. Achar was part of a team that traveled to Sierra Leone under the aegis of a European organization called Médecins Sans Frontières, or Doctors Without Borders. In a city called Magburaka, MSF had erected a treatment center that kept patients carefully quarantined, and inside the facility’s high-risk zone, doctors like Achar wore the usual polythene “moon suits,” gloves, face masks, and goggles to protect themselves from infection.


With temperatures rising to about 90 degrees Fahrenheit, Achar could stay inside for only about an hour at a time. “The suit doesn’t let your skin breathe. It can’t,” he says. “You get very, very hot.” And even while inside, so much of his time was spent not treating the patients, but merely recording their medical information—a tedious but necessary part of containing an epidemic that has now claimed an estimated 10,000 lives. Due to the risk of contamination, he would take notes on paper, walk the paper to the edge of the enclosure, shout the information to someone on the other side of a fence, and later destroy the paper. “The paper can’t come out of the high-risk zone,” he says.


Looking for a better way, he phoned Ivan Gayton, a colleague at the MSF home office in London. Gayton calls himself a logistician. He helps the organization get stuff done. In 2010, he tracked down someone at Google who could help him use its Google Earth service to map the locations of patients during a cholera epidemic in Haiti. As part of its charitable arm, Google.org, the tech giant runs a “crisis response team” that does stuff like this. So, after talking to Achar, Gayton phoned Google again, and the company responded with a new piece of tech: a computer tablet that could replace those paper notes and all that shouting over the fence.


The Tablet You Dunk in Chlorine


Over the next few months, drawing on employees from across the company, Google helped build a specialized Android tablet where Achar and other doctors could record medical info from inside the high-risk zone and then send it wirelessly to servers on the outside. Here in everyday America, a wireless tablet may seem like basic technology. But in the middle of an Ebola epidemic in West Africa, which offers limited internet and other tech infrastructure, it’s not.


Ivan Gayton. Credit: Médecins Sans Frontières.

The tablet is encased in polycarbonate, so that it can be dipped in chlorine and removed from the facility, and the server runs on battery power. “There was a real need for this,” says Dr. Eric D. Perakslis, part of the department of biomedical informatics at the Harvard Medical School, who has closely followed the project. “It’s very impressive, and it’s unique.”


The system is now used by Achar and other doctors in West Africa, where patients are still being treated. During the testing phase, the server ran off a motorcycle battery, but now it includes its own lithium-ion batteries, much like those in your cell phone, which can charge via a portable generator. Now, inside the high-risk zone, Achar can not only wirelessly send data over the fence, but also readily access information he didn’t have before, including a patient’s latest blood test results. Plus, after dipping the thing in chlorine for ten minutes, he can take it outside the zone and continue working with it after removing his moon suit.


Yes, the Ebola epidemic appears to be waning. But the system provides a blueprint for the future. After catching wind of the project, Perakslis says, he’s working to help MSF “open source” the technology, freely sharing the software code and hardware designs with the world at large. The hope is that the system could also be used to battle other epidemics, including cholera outbreaks, and perhaps help with medical research, including clinical trials for drug-resistant tuberculosis. “You can think of other highly toxic environments, even laboratory environments, where this could really be helpful,” Perakslis says.


Fighting Disease Like a Tech Company


But it could also provide a path to all sorts of other new technologies for fighting disease and illness in developing countries. If tech is open source, you see, you can not only use it for free but also modify it. This is actually what MSF and Google themselves did in creating their system for the Ebola wards. In fashioning the software that runs on the tablet and server, they built atop an existing open source medical records tool called OpenMRS. One technology is just a starting point for another.


What’s more, says Ivan Gayton, the project offers a lesson in how organizations like MSF should operate. In the past, they operated according to carefully organized hierarchies of employees. And they were forced to use what came down from the big software and hardware sellers. But the tablet project was an almost ad-hoc collaboration. Achar phoned Gayton. Gayton phoned Google. Soon, Google sent about a dozen employees to London, including Google Drive project manager Ganesh Shakar, who was living in Australia. Later, Gayton says, MSF roped in several other volunteer techies from outside the organization, including a 19-year-old gaming entrepreneur.


Finally, various parts of the team, spanning multiple organizations, flew down to Sierra Leone to test and deploy the system in the real world. Organizations like MSF don’t typically work in this way, Gayton explains. And they should.


“We’ve learned new ways of doing things,” he says. “In the past, we used the Roman-legion, hierarchical, triangle structure. But Google and the tech volunteers we work with organize in different ways—ways more like what you see with open source projects like Linux, with more or less one manager and then a bunch of equal peers. That can have profound implications for the humanitarian field.”



So, Arkansas Is Leading the Learn to Code Movement

Arkansas Gov. Asa Hutchinson speaks to legislators in Little Rock, Ark., Jan. 22, 2015. Danny Johnston/AP



Arkansas may be one of the last states that comes to mind when you think of major hubs of tech talent. And yet, last month, it became the first to pass a truly comprehensive law requiring all public and charter high schools to offer computer science courses to students, beating better known tech centers like California and New York to the punch.


If for one reason or another you’ve been following local Arkansas politics, this should come as no surprise. During his run for governor last November, Asa Hutchinson made computer science education for all one of his core campaign promises. “It’s probably the first time in the history of politics that the word ‘coding’ was used in a political commercial,” Hutchinson tells WIRED.


It’s not because Hutchinson, former head of the Drug Enforcement Administration, has a personal passion for coding (he confesses he only recently learned what “JavaScript” is), but because he believes fostering a generation of computer science-savvy graduates will give an unprecedented boost to the Arkansas economy in years to come.


“Whether you’re looking at manufacturing and the use of robotics or the knowledge industries, they need computer programmers,” he says. “If we can’t produce those workers, we’re not going to be able to attract and keep the industry we want.”


The Call for Coding


That computer science ought to be a fundamental part of every child’s education was once a refrain sung only by the Silicon Valley set. Now, however, it’s being echoed by government officials from both sides of the aisle, in every corner of the country, and even at a federal level. Just last week, President Obama announced the launch of the TechHire initiative, a new program that aims to connect more people with coding classes and more employers with this new cohort of tech workers. As the president noted in his speech, two-thirds of the country’s tech jobs exist in non-tech industries, meaning the need for tech talent extends to places that aren’t necessarily nerve centers of tech activity, like, well, Arkansas.


“People don’t realize in every single state and every single industry there’s a shortage of computer engineers and software engineers,” says Hadi Partovi, co-founder and CEO of Code.org, a non-profit organization that advocates for computer science education in schools and builds tools to help students learn to code.


Learning to use Facebook is far less educational than learning to make the next Facebook. Hadi Partovi


According to Partovi, other states have taken half-steps toward similar legislation. South Carolina, for instance, lists “computer science” as a high school graduation requirement, but that credit can be filled by learning basic skills like keyboarding. Other states require tech education courses, but Partovi says, that can mean something as simple as learning to use social media. “Calling it ‘computer science’ is confused,” he says. “Learning to use Facebook is far less educational than learning to make the next Facebook.”


Other states, like Washington, are working to expand computer science courses or to ensure these courses count toward graduation instead of as an elective, but that doesn’t guarantee all schools will actually teach it. And last summer, Texas quietly mandated computer science education in schools. But so far, Partovi says, the rule has gone unenforced and unfunded.


The Need for Teachers Who Code


All of which makes what Arkansas just did particularly noteworthy. The state didn’t just pass a law. It also set aside $5 million to get this new program off the ground in Arkansas schools this fall. That money will not only fund teacher training, but it will also be used to reward schools that have high performance and enrollment rates in the new courses, which are not mandatory for students. “It’s a way to put a larger investment into it and make the whole program stronger and more long lasting,” Hutchinson says. “It’s a small investment with the opportunity for a huge return.”


Still, he admits not everyone was thrilled with this new initiative, least of all the educators themselves. Currently, he says only about 20 teachers in the entire state are “properly prepared” to teach these new courses, which makes teacher training a monstrous undertaking. “There’s a lack of confidence and comfort,” he says. “That comfort level needs to change.”


To ease the transition, the state is offering schools free access to an online education portal called Virtual Arkansas, which can supplement training for both teachers and students.


“These days you can have lectures delivered via video and problem sets that grade themselves,” says Partovi, who is working with the state to bring Code.org’s tools to schools as well. “All of this reduces how much effort the teacher needs to put in and reduces the total cost of training the teacher to do that work.”


Computer Science Everywhere


Which raises the question: why do students need to learn computer science in schools, when the internet can teach them everything they need to know? For starters, Partovi says, students who take coding courses in school have a much higher completion rate than those who learn on their own time.


But there’s more to it than that, he says. When computer science is offered during the school day, it means every kid gets a chance to learn, regardless of whether they have a computer at home or a role model encouraging them to pursue a career in tech. Bringing coding to schools can equalize access to and interest in coding, an important step in bringing some much-needed diversity to the tech field. Partovi says this effect is already showing up on Code.org, where 43 percent of its 5 million students are girls, and 37 percent of students are black and Hispanic.


“It’s a really great sign for what’s going to happen for tech diversity in about 10 years,” Partovi says.


But even if the diversity and employment arguments don’t move you, Partovi says there’s a much more basic reason for more states to follow Arkansas’ lead. “No matter what you want to major in, computer science is now impacting the world at a foundational level,” he says. “You learned about gravity and the digestive system, not because you became a physicist or biologist. It’s just learning about how the world works, and for today’s kids, learning how technology works is equally foundational.”