What if an AI were innocent until proven guilty? That's the question I had to ask.

The authors of The Singularity is Near: When Humans Transcend Biology have a lot to say about thinking about robots before we actually have them around our towns and cities. In fact, they point out that humanity's first law for artificial intelligence (AI) machines, Isaac Asimov's First Law of Robotics, predates artificial intelligence itself. All AI machines have one thing in common: no matter how stupid or destructive they are, they have to be happy in order to survive and thrive. I think this is a great, life-saving law, but I also think it is simplistic and overly symbolic, mainly because it ignores the meaning of goodness, which is often seen as highly subjective.

Which is why the authors of a recent article in the New York Times share their thoughts on what an AI might think. “I doubt it would develop its own opinions,” they said. “It would simply act as a weapon of mass destruction. It would use morally and ethically neutral information about the value of something, then direct its firepower at its opponent.”

I asked them to explain themselves further, which they did in a group e-mail. They’re keenly aware of how people may see a robot as racist, sexist, homophobic, or just some sicko, and so in addition to asking a variety of questions about the morality of an AI that would be labeled innocent until proven guilty, I also asked whether they would in any way indicate whether being socially responsible was a good or bad thing.

So, how, they asked, should an AI be judged? “In our view, the best way to judge an AI is to see it as an individual,” they said. “You could ask whether it is able to perform well, whether it is accountable for its actions and its own cognitive life, and whether it has an ethical compass. To understand where a robot is coming from, you could measure its moral reasoning and decision-making.”

Another reason to look for moral meaning in AI is to help understand the degree to which these machines could think rationally. Good AI might represent the possibility of a fully automated future, one with cleaner environments and a safer society in the long term, but such a future is clearly not yet within the collective imagination of humans.

Lastly, I asked the authors whether they themselves believe there will be a day when we’ll all be living in a totally automated society. “Unfortunately, we have not come to grips with the fundamental technological advances that ultimately lead to that future,” they wrote. “AI and automation are changing society faster than we can adapt our way of living and our use of resources.”

This is an important point, as it touches on the legacy of our era and its rules about caring for autonomous, thinking machines. Indeed, even as the authors of The Singularity is Near explain their belief that there is hope for AI to make society stronger and more cooperative, they also raise the possibility that this technology will be used to isolate more humans and enable more disturbing human behavior.

A final thought: although I haven’t read the book, the authors of this article introduce themselves by saying that they are dedicated to “othering,” the practice of seeking to separate a tech-savvy reader from their concern for the planet and its natural environment. I see all of this as very much in the spirit of the Singularity, in which computers will drive out humans, and of the tectonic shifts that will cause people to forgo intimate community and the systems that hold us together.
