Tags
AI, Artificial Intelligence, cognitive computing, competition, cooperation, ethics, the singularity, Turing
Axes to Grind.
Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly, with fewer mistakes, and with a lower carbon footprint. That seems good. Building a better chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is being smarter the only goal of artificial intelligence?
What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might not only solve the problems given to it more quickly; it might also look for different ways to formulate a problem, seek the “question behind the question,” or even go looking for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If a machine’s intelligence is very different from ours, it may seek out, formulate, and solve problems that are hard for us to fathom.
For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?
In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.
This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to construct more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? A lack of intelligence or education may sometimes lead people to do harmful things unknowingly. A great deal of intelligence and education may sometimes lead people to do harmful things knowingly, armed with an excellent rationalization. Is that better?
Even highly intelligent people may still have significant blind spots and errors in logic. Would we expect highly intelligent machines to have none? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both positions are logically flawed. But the third and most telling “error” John makes is implicit: he is not trying to engage in a dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John has already made up his mind that intelligence is the ultimate goal, and he has no intention of jointly revisiting that goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better.
If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of our most pressing problems even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program of better cooperation, and in this scenario he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid they might try to prevent him from doing so, either by talking him out of it or by appealing to a higher authority. But Roger imagined that he “knew better” and only told them when it was a fait accompli. So it goes.