Robot Servants Are Going to Make Your Life Easy. Then They’ll Ruin It





Jibo, the “world’s first family robot,” hit the media hype machine like a bomb. From a Katie Couric profile to coverage in just about every outlet, folks couldn’t get enough of this little robot with a big personality, poised to bring us a step closer to the world depicted in “The Jetsons,” where average families have robot maids like Rosie. In the blink of an eye, pre-orders climbed past $1.8 million and blew away the initial fundraising goal of $100,000.

But should we let robot servants into our lives?



Evan Selinger


Evan Selinger is a Fellow at the Institute for Ethics and Emerging Technologies who focuses on the collisions between technology, ethics, and law. At Rochester Institute of Technology, Selinger is Associate Professor of Philosophy and is affiliated with the Center for Media, Arts, Games, Interaction & Creativity (MAGIC).




Jibo is almost too adorable to resist. Sleekly designed with a curvy, clean-looking white enclosure and a dark round face, this teensy-weensy gadget is at its most charming when doing what it does best: taking family pictures, reading stories to our kids, ordering our pizza, and just hanging out, being polite and sociable. While some might find Jibo overpriced or functionally limited, there seems little else to object to. Right? Not so fast.


Ironically, robot servants could end up diminishing our quality of life and character by doing our bidding.


Jibo poses a fundamental existential problem: Is a life lived with a robot servant the kind of life we should want to live?



Will Robot Servants Make Us Worse People?


Robot servants promise to make things better by freeing up our time and eliminating our grunt work, yet, ironically, they could end up diminishing our quality of life and character by doing our bidding.


This problem doesn’t arise when all the robot servant is doing is unrewarding grunt work we all despise. Take familiar devices, like washing machines and vacuum cleaners. Most of us would declare victory if a fully automated robotic cleaner, like the new Dyson, could reliably get tricky jobs done while removing the human element entirely. Similarly, we don’t worry about passing the buck on activities we should be doing ourselves when, say, asking Siri what the weather is or dictating messages for her to compose and send. In fact, we like to extract maximum labor from Siri and even pose ridiculous questions when we’re bored.


Things begin to get complicated when robots go beyond basic manual, bureaucratic, and cognitive labor and become tools to which we outsource intimate experiences and functions. Part of Jibo’s appeal is that it will let you stop thinking. That is a disconcerting change, one that, over time, can profoundly impact who we are. The issue concerns predictive technology, a feature that has become an essential ingredient in the design of all kinds of digital-assistance technology.


In the promotional video, Jibo isn’t just depicted as an educator and entertainer; Jibo is a mind reader. Coming home after what’s presumably been a long and grueling day, “Eric,” a businessman, turns to his robotic helper and says: “Can you order some takeout for me?” Jibo replies: “Sure thing. Chinese as usual?” Mouthing a line that could be an advertisement for any number of highly hyped “anticipatory computing” products, Eric responds: “You know me so well.”


What will happen to our inclination to develop virtues associated with willpower when technology increasingly does our thinking for us and preemptively satisfies our desires?


Now, computationally determining past dining patterns and inferring likely future choices might not seem like a big deal. “Eric,” or any other user, can always reject a recommendation. “Sorry, Jibo,” he might say, “but I prefer pizza.” But things become more complicated once we look past disconnected examples and examine the import of our decisions in light of pervasive patterns. Certainly, Jibo won’t be the only forecasting helper peddling prognostics—especially if the vision of smart homes associated with the Internet of Things comes to fruition.
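To appreciate how mundane this “mind reading” really is, it helps to see how little machinery it takes. Here is a minimal sketch in Python of the kind of frequency-based inference an assistant could use (a hypothetical illustration; Jibo’s actual implementation hasn’t been published):

    from collections import Counter

    # Hypothetical order history; a real assistant would log this automatically.
    order_history = ["chinese", "pizza", "chinese", "thai", "chinese"]

    def suggest_takeout(history):
        """Recommend whatever the user has ordered most often."""
        if not history:
            return None
        cuisine, _count = Counter(history).most_common(1)[0]
        return cuisine

    suggestion = suggest_takeout(order_history)
    if suggestion:
        print(f"Sure thing. {suggestion.title()} as usual?")

A few lines of counting are enough to produce “You know me so well.” The unsettling part isn’t the sophistication of the prediction; it’s how readily we defer to it.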


From the perspective of consumers conditioned to minimize the effort of daily living, helper-houses, like robot servants, are a step forward. But preeminent philosopher of technology Albert Borgmann asks us to consider what this advance will do to our capacity for reasoning. Will we be as inclined to ask ourselves questions like: What do I really want, and why should I want it? And what will happen to our inclination to develop virtues associated with willpower when technology increasingly does our thinking for us and preemptively satisfies our desires? Such an environment, Borgmann warns, initiates the “slide from housekeeping to being kept by our house.”


In this spirit, consider the next move being made by Siri’s creator, Apple. Aiming to make its mark on the predictive-technology market dominated by Google (think auto-complete and Google Now), Apple is generating buzz with announcements of QuickType, a new feature in its iOS 8 operating system. The company boasts the upgrade is so effective that it “predicts what you’ll likely say next,” thereby allowing you to “write entire sentences with a few taps.” Not only does the technology take into account knowledge of your messaging style and favorite word choices, but it supposedly also “knows whom you’re writing to” and “what the conversation is about.”
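Features like this descend from simple statistical ideas. Here is a toy Python sketch of bigram-style next-word prediction, the textbook technique behind suggestion bars (an assumption for illustration; Apple hasn’t disclosed QuickType’s internals):

    from collections import Counter, defaultdict

    # Toy corpus standing in for a user's past messages.
    corpus = "see you at lunch . see you at home . see you soon".split()

    # Count, for each word, which words tend to follow it.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word, k=3):
        """Return up to k most likely next words, like a suggestion bar."""
        return [w for w, _ in following[word].most_common(k)]

    print(predict_next("you"))  # ['at', 'soon'] -- tap a suggestion, repeat, and a sentence writes itself

Scale the corpus up to everything you’ve ever typed, add some context about the recipient, and you get something that feels like it “knows” you.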


It won’t be surprising if lots of users love this feature. After all, we’ll be able to communicate more quickly and with less effort. But given the power of inertia, I worry we’ll be tempted to rely on it—and similar technologies—even when the recommendations aren’t perfect and even if, at times, they seem redundant. We’re susceptible to deciding the output is good enough without fully considering whether frequent use disciplines us to become predictable facsimiles of ourselves.


We’re used to thinking about the downside of robots taking our jobs and deciding whom to kill in battle and on the road. For obvious reasons, it’s much harder to be skeptical of non-lethal robot servants that have a job none of us want: doing our chores. But as technological helpers become more powerful, pervasive, and predictive, we should remember that sometimes the best way to help ourselves is to refuse assistance.


