What do you see when you close your eyes?

Written by Patrick Tylee



I don’t watch many foreign films. Occasionally, one will catch my eye and I’ll give it a shot.

This weekend, some lower back pain had argued me into a corner on the couch. Netflix had recently uploaded a handful of new movies, so I figured it was a good time to see what they’d added to the available menu.

As usual, I shop through the sci-fi and fantasy first. There were a few that came up listed under the ‘artificial intelligence’ sub-heading. Yeah, yeah…seen ‘em all.

Except that one.

Eva.

Eva the Movie

A Spanish film, produced in 2011, that had won several awards—fifteen actually. Okay. Now I’m interested. American movie critics scored it low. Another good reason to watch it. From what I could tell, it was a story about human relationships, with robots as the subject matter.

“Well, alrighty-then,” I told the dog, “grab some Scooby-Snacks, and cop a squat.”

She wagged.

“How’s your Español today?”

“Barko,” she replied. About as good as mine, then.

I’ll watch any movie…for about five minutes. If it stinks a little early on, it’s gonna stink way worse later. I kept waiting for the opening of the movie to show evidence of low-budget production. Hmmmm…nope. The artistry in the animation and quality of the graphics had me impressed.

Not yet to Scene One, and I knew this would turn out well. Heck, even the opening credits were tight and unobtrusive, never detracting from the magical background images.

The film follows Alex, a leading expert in the design of emotional programming for robots. Likely, he’s the best at robot feelings because he’s so out of touch with his own. The character is well-played by Daniel Brühl, and is quite believable as the genius without a clue. Thankfully, there is really no over-acting in the film. Every one of the cast members portrays their respective parts with forethought and practice. It would be so easy for a film like this to go all campy. It never does, holding the line of believable personas living out their lives as scientists who only want the best for their beloved artificial creations. Alex endeavors to create an AI that is truly free in its ability to choose right from wrong.

The main character, Eva, is portrayed by Claudia Vega. She is doubtless one of the finest child actors on the scene right now, and she makes it easy to love her character. Alex chooses his niece as the baseline emo for the SL-9, the latest series of robots designed at his alma mater. “A boring kid makes a boring robot,” he tells his professor. He moves forward (and sometimes backward) in his work to develop the perfectly imperfect AI for a childlike chassis.

What comes up, eventually, is the risk that the more ‘human’ you make the robot’s mind, the greater the chance it will make mistakes—mistakes that could dramatically impact someone’s life.

People screw up all the time. We might even harm each other. We aren’t supposed to, and there may be laws that say we shouldn’t. But laws do not always prevent humans from hurting each other.

Laws, internalized, can and do prevent robots from harming humans.

We all remember the Three Laws of Robotics, which Isaac Asimov introduced in his 1942 story “Runaround,” later collected in I, Robot:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In a subtle though confident way, Eva assumes that the viewer is aware of the Laws. And for those of you who don’t wear R2-D2 pajamas to bed every night, the idea is simply that any constructed AI would be prevented from deciding it best to hurt a human, even given the choice.

My question for you is: Who the hell do we think we are?

Humans write laws that allow us to kill when we judge it necessary: in war, in capital punishment, in self-defense, in defense of another, to prevent the commission of a felony (believe it or not), or to prevent the escape of a felon from authorized custody.

Humans create machines that are capable of killing other humans and intended to do so, sometimes automatically and without human oversight. As long as the machine has no choice in whether or not to kill, it’s allowable.

But, we cannot create a machine that is permitted to choose.

So, I put it to you—perhaps the problem isn’t that the AI might choose to kill a human when we don’t want it to, but rather that the AI may see the faulty thinking of its human designers and decide for itself that we shouldn’t be killing at all. Maybe it will ignore the command to fire its weapon at the enemy in the trenches, or refuse to launch its missile at the terrorist in the car below.

What if the machine comes to the conclusion that its makers are a bunch of murderous heathens, and that the right thing is to protect all life, not just the lives of those whose names are listed on the ‘Made by us’ label inside the maintenance cover?

What if the machine we create decides that we are in the moral wrong? What then?

Several movies have borne that out, one of the most famous being the Terminator series created by James Cameron and Gale Anne Hurd. It wasn’t the first, of course. Be it Colossus or the Cylons of Battlestar Galactica (the second series), the smartest people on Earth may build a mind capable of deducing that we are just too stupid to live.

In Eva, the emo designer Alex strives to find the balance of various synaptic components that will reveal a mind that is “…fun, yet safe.”

Fun, yet safe…like…swimming? Swimming in the open ocean? Skydiving? S&M with bondage? Full-contact football? Russian roulette?

It’s all relative—hence the Three Laws of Robotics.

Don’t hurt anybody. Don’t listen to anyone telling you to hurt anybody. Don’t let anyone hurt you, except…don’t hurt anybody…

Eva SL-9


The SL-9 prototype gets its feelings hurt when Alex laughs at the robot’s answer to a test question. It was the old “I’m not laughing at you; I’m laughing with you.” But it was too late. SL-9 was already pissed off. Anger clouded its mind, so further explanations went unheard. You could see the fear of not being understood in the little pivoting eyebrows of the AI.


Perhaps, we could merely lie to the AI, and set a rule that ‘humans always mean well’. Then, there wouldn’t be any confusion. We could further dull the reasoning of the robot by convincing it that if one human harms another, then there was likely a need for that action.

One of the common predicaments for the AI in movies and TV is being assigned to a police department. He clunks his way along with his human partner…they stumble upon a bank robbery. Bad guy pulls a gun. Human cop pulls a gun. Someone is going to get shot. Does the robot allow it? Does he jump in front of the bullet to save someone? Which one? Is the cop’s life worth more than the bad guy’s at that moment in time? Does the implied felony degrade the value of the bank robber’s life to just below the threshold of Law #1?

Why do we continue to write stories and make movies about this subject…about this conundrum?

We keep asking ourselves the same question, over and over. “When is it okay to kill someone?”

But, we can’t bear to deal with this within our own hearts and minds—it’s too painful. Let’s have someone else deal with it…um…yeah! How about a robot! We can have the artificial intelligence harried into a mental corner over whether to save the hapless girl from the homicidal maniac, or to throw itself over the policeman’s gun to keep the bullet from striking the potentially worthy maniac. What if the maniac could turn his life around…become a missionary? What if?

Throughout Eva, the emo designers are encouraging the robots to ask themselves, “What if?”

“Who’s that?” an SL-7 asks.

“Never mind…keep painting,” says the robotics college student.

“But, I’m bored!”

They are gifted with creativity, and that makes them want to calculate beyond the true answer. Two plus two is four, but what if it isn’t? What if we let it be five and a half? Then, all other calculations are incorrect, and the mind fails. So, two plus two must stay four AND you have to paint an abstract on the canvas.

Back to my initial question: who do we think we are?

Why do we as humans get to decide life or death for each other?

Why do we humans get to create insentient machines that kill?

Why do we humans get to dictate to a sentient machine that it may not choose life or death?

Who do we think we are?

Can you see yourself having to make that decision?

Who is worthy of your self-sacrifice, and who is not?

You see all the hatred and jealousy and murders in this world.

Surely, none of that exists within you, does it?

So, here’s the question for you…be you flesh and blood, or maybe something other…

What do you see when you close your eyes?

