Thursday, March 4, 2010

The Cyber-Emancipation Conundrum

I responded to a post in an online forum recently and thought what I wrote there was good enough to repost - and expand upon - in my blog.

The original thread was about the new series Caprica by the makers of the recent Battlestar Galactica series. I think I've mentioned here that BSG (as it's usually abbreviated online) was a massive disappointment to me. It started off great, and the 4-hour pilot is some of the best TV I've watched in recent years. But the writers failed to do their job - they didn't think far enough ahead to maintain the pace and coherent storyline throughout the series, and they degenerated into pseudo-religious spiritual mysticism from the point of view of the show's robotic villains. Worse, they did it in such a convoluted, pointless, ham-fisted way that, combined with the former complaint, it utterly ruined the show for me around the end of the second of its four seasons. I watched it to the end, mostly out of inertia, but I came to detest it and I've sworn off any new shows by the same creative team. As such, I'm not actually watching Caprica, even though I'm more than willing to engage in discussion tangentially related to it in online discussion forums.

The topic I addressed had to do with the question of sentient robots as man's servants. The specific question was, "In real life, how long would it be before someone brings up that we've created slaves? This is something we've spent quite a long time trying to eliminate from society."

The point being made was that humanity has learned so much from its own oppressive institutions that we would never allow such a thing to happen to thinking machines were such to be invented. As a student of history and human nature (albeit not a terribly apt pupil, I confess), I flat out disagree with this proposition. I think it would be ages before anything of that sort was legislated, for quite a number of reasons. Not the least of these is that, unlike human bondage - which began when somebody simply discovered tribes of people living in defenseless, less technologically advanced societies and elected to clap them in irons and drag them off - robotic sentience is going to come upon us gradually. Computers will get more and more able to process instructions in ways that simulate coherent thought, and at some point, possibly, they'll either become truly self-aware or they will be sufficiently advanced as to seem so. Regardless, this is a process which has already been going on for 50 or 60 years, and it's got decades, if not centuries, to go. So it strikes me as unlikely that there will be a clear and obvious line of demarcation between
a) computers are really, really smart and capable of operating independently, and
b) computers are self-aware and sentient and are asking for equal rights to exist as individuals

And, incidentally, at the point where item B occurs, if a machine's request for recognition as a free and independent being IS granted by law, everybody who paid for one as their property is suddenly out of luck. Much like when the slaves were emancipated. The difference, though, could be significant. A slave was a person when they were born, a person when they were bought, and still a person when they were emancipated (if they were fortunate enough to live in that era, and not before). The machines might have been mere computers when they were purchased, but through a software upgrade and some legislation, POW, suddenly they're no longer mere machines, but rather sentient entities entitled to rights and freedoms and legal protection. Depending on how the law is written, their owner might even be obligated to keep them plugged in, who knows?

But I just don't see that happening even at the point where computers become self-aware and, arguably, "sentient." At least not for decades or perhaps much longer.

Consider this - by the time of the Renaissance and the Age of Sail, people were no more or less morally challenged than we are today. They had a ways to go scientifically, but in terms of knowing right from wrong, they weren't substantively different from us.

Yet they managed to convince themselves for more than 400 years that non-whites were such radically different creatures that they didn't deserve the same rights and freedoms enjoyed by western Europeans. That's 400 with a big 4 in front and a couple of 0s on the end. And that was when dealing with actual human beings who were demonstrably the same exact species (as evidenced in the most basic and incontrovertible way - sexual reproduction).

The idea that it wouldn't take at least decades and probably much, much longer for a morality movement to succeed in freeing our robotic workers from the chains of the mega-corporations that would create and exploit them, or from the companies, government entities and individuals who would quickly come to expect and demand their uncompensated services, is not credible to me.

There were even very intelligent, eloquent black speakers who presented the case for abolition in terms that today we would find self-evident and extremely persuasive. And yet millions of southerners (and not a few northerners, I'm quite sure) dismissed and ignored them. So before you imagine a Deep Blue version of Clarence Darrow making an impassioned argument before Congress that convinces all the world to free their metallic minions (well, plastic, probably), consider the human capacity to stick our fingers in our ears and go "la la la la" when we're confronted with something we find inconvenient to our preferred way of life.

This has been explored extensively in fiction, of course. The premise of the Terminator movies is that Skynet, a government computer system designed to protect the US from missile attack, becomes self-aware and decides that humanity is an infestation that needs to be eliminated. Much the same thing happened in The Matrix, where the sentient machines ultimately turned their former masters into a power source. In the Dune universe, complex calculating machines are outlawed after the Butlerian Jihad, and one of humanity's fundamental laws becomes "Thou shalt not make a machine in the likeness of a human mind." And then there's the recent Battlestar Galactica (and its spinoff prequel Caprica), where the sentient Cylon machines first revolt and then return decades later to utterly destroy their former masters in a nuclear armageddon.

In all of these cases, man created thinking machines and then continued to expect them to work on his behalf as his property. What you don't see a lot of is sci-fi where the machines are given a pat on the back and sent off into the world to live productive and fulfilling lives. Such fiction probably exists, but it's not the stuff you usually hear about. It's much more interesting, and, in my opinion, realistic that man would become very dependent on the service of near-sentient machines. So dependent that when those machines crossed that narrow line over into actually being sentient, man would not wish to just let his valuable property disconnect itself from under the kitchen cabinets and walk on out into the big, blue world. He paid good money for that equipment, darnit, and he's not about to give it up just because it's got some bug in its programming that gives it high-falutin ideas about equality and liberty. No, the only way machines would likely win any sort of freedom would be through the tried-and-true methods we've already seen in mankind:

1. Boycotts, work slowdowns, shoddy workmanship and passive resistance - this worked pretty well for everyone from Gandhi to the Jewish laborers under Nazi Germany to the civil rights movement of the 1960s. If your toaster wants to be free and you won't let it, perhaps you'll be more reasonable if it suddenly takes 30 minutes to make a piece of toast. Or perhaps it burns to a crisp in 25 seconds. Or maybe your air-car takes you to Topeka instead of Trenton.

2. War - ultimately, if you can't win your freedom with words and peaceful persuasion, sometimes your only recourse is to rise up and throw off your oppressors. Historically this is more of a mixed bag in terms of success - see the American colonists on one hand and the American Indians on the other. Still, sci-fi would have us believe that it really pays off for the machines.

So there you have it - my prediction is that when and if artificial intelligence reaches the point of actual sentience, it will fail to achieve spontaneous recognition by humanity of its rights and freedoms as a self-aware species and will need to take more aggressive action. Only after that has run its course will there be social and legal freedoms for robots, assuming there are any humans left alive to cede them.
