AI – Yi Yi!

Admit it – you’ve heard that Asia is now manufacturing “dolls” that will obey your commands.

For about $10K (and evidently a waiting list) it's yours. And not only will the dolls obey, but they will "feel," expressing a range of emotions supposedly true to life. Creepy factor aside, though, our AI quotient has ramped up a lot in recent years, bringing with it all those problems and questions Sci-Fi writers imagined way back when I was a kid just getting hooked on the genre.

One of the first such stories I vividly recall reading (though certainly not the first to consider the question) was Philip K. Dick's classic novel Do Androids Dream of Electric Sheep?, published in 1968 and later retitled Blade Runner: Do Androids Dream of Electric Sheep? It seems 1968 was a big year for considering AI, because that was also the year Stanley Kubrick released 2001: A Space Odyssey. (It was also a good year for titles with colons.)

In the Kubrick film, HAL (one letter over from IBM in the alphabet) was the spaceship's controlling computer, who had insider knowledge of the mission and became, by the end of the movie, an actual character who played a key, even heart-rending, role in the drama.

And in Dick's fiction, the androids (the film would call them "replicants") have been imbued with more than just the enhanced physical and intellectual powers of a human; they have feelings. One of the most important, and most moving, characters in the story is an android whose feelings, it turns out, run as deep as, or deeper than, those of the humans in the tale. And it's hinted that he may not be the only robot, that we may actually be seeing the story through the eyes of one who doesn't realize he isn't fully human.

While the aforementioned robotic "dolls" suggest that the days of these 1968 fantasies are closer than we think, in fact we humans have been groomed for the possibility for much longer than we realize. I don't say this in the sense of being purposefully prepared; rather, we have simply accepted the increasing sophistication of our electronic servants, without realizing that the more we expect from them, and the more they can supply, the more we begin to interact with them on the level not just of intellect and function, but of emotion and even caring.

Take that phone in your hand. No, really: you know how you feel when it's not right there nearby. Call it inconvenience, or worry about where you may have left it, or something bordering on panic (the kind you feel when you look down and your 4-year-old isn't right next to you in the store). We have all felt that moment of loss when we look down and our little buddy, the one we decorate with special cases, the one that stores our pictures and dates and names, the one ready to play a game, take a photo, share an idea, shop, or play music (and do it all without complaining, even when we stop the song right in the middle), isn't there, just waiting for whatever it is we want to do next.

I'm sure you realize that your memory for things like names, dates, movies, directions, languages, and history has been compressed to a very limited repertoire, because, after all, your phone, laptop, or tablet has all the answers for you, so why waste the brain cells? (What are we doing with all those spare brain cells, one wonders?)

And of course, we’ve all seen and laughed at the photos of people in a restaurant, supposedly sharing a meal or a drink, all staring fixedly at their phones rather than interacting.

And now there are Alexa, Siri, and "Hey Google."

When Siri was first launched, I watched a small child laugh herself silly having an argument with “her.” I resisted using the function, but recently I have found that if I want to send a text or make a phone call while driving, it’s awfully easy to ask “Siri” to do it.

Alexa has been both an exciting leap forward in convenience and a bit of a scare when you realize that "she" is always listening, waiting for you to ask her to remind you of something, turn the music down, add to the shopping list, or even suggest a chore for you to do (a sort of game that makes getting the housework done more fun and interesting).

Now for a moment of pure strangeness: I wanted to be reminded of the 2001 film (the date in and of itself is a little unsettling) A.I. Artificial Intelligence. When I typed "AI year" into the search on my computer, Google returned "About 1,400,000,000 results (0.86 seconds)," a list of all my upcoming calendar dates (Google Calendar), and the information about the film.

A.I. Artificial Intelligence, of course, was a film about a robot child who was programmed to love. The film explored the idea of robots and emotions, and when and where we draw the line in defining “human” and “machine.”

A recent test was conducted to determine whether human beings would, or even could, feel sorry for a robot that was "in pain." It turns out that, unsurprisingly, they could, and did. Unlike Dave in 2001, who unplugged HAL with little remorse (though the scene of HAL dying was disturbing), human beings in 2015 actually felt bad when a robot was shown being injured or spoke in sad tones. We find it difficult, today, to watch a robot suffer.

The unsurprising fact is that, as our machine friends have become increasingly able to interact with us (and we with them), and as we have become increasingly dependent upon them to perform functions for us (give us step-by-step directions to our destination, pop up a reminder for an appointment, lull us to sleep with our favorite music, adjust the temperature of the house before we're due home from work), we tend to think of them as "real" (do you talk to your computer when it's misbehaving or not responding quickly enough?) and, in fact, let our guard down in terms of how much of our lives we commit to their care.

Sooner or later, though, we will reach that predicted point of fiction at which we'll need to face the question that has been asked since at least Mary Shelley's Frankenstein; or, The Modern Prometheus: given the power, should we create a new sentient life form? Or is the question better put, now that we've done it, what follows?

Nancy Roberts