The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop an even more super-intelligent computer system, and so on. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head and the brain inside can continue to grow. It seems unlikely, for this and a variety of other reasons, that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous one. Looked at from this perspective, the “Singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty about what will happen. But in this series of essays, I will examine some of the possible futures that I see.

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the Allies and prevent possible world domination by the Nazis. He did this by designing a code-breaking machine. To reward his service, the police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality, and ultimately hounded him, literally, to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent”? What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way. A person communicates with something by teletype. That something could be another human being, or it could be a computer. If the person cannot determine whether they are communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
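For readers who like to see a procedure written out, the setup above can be sketched as a little simulation. This is my own illustration, not Turing’s formulation: the canned `human_reply` and `machine_reply` functions and the `guessing_judge` are hypothetical stand-ins for live teletype conversations, and the machine is made indistinguishable by construction so the judge can do no better than chance.

```python
import random

# Hypothetical stand-ins: in Turing's setup these would be live
# teletype conversations, not canned strings.
def human_reply(prompt):
    return f"Hmm, let me think about '{prompt}'..."

def machine_reply(prompt):
    # Indistinguishable from the human by design, for this sketch.
    return f"Hmm, let me think about '{prompt}'..."

def imitation_game(judge, prompts, trials=1000):
    """Run repeated blind trials; return the judge's accuracy at
    spotting the machine. Accuracy near 0.5 means chance-level
    discrimination, i.e. the machine 'passes'."""
    correct = 0
    for _ in range(trials):
        # Randomly assign the machine to slot A or B so the judge
        # cannot rely on position.
        machine_is_a = random.random() < 0.5
        a, b = ((machine_reply, human_reply) if machine_is_a
                else (human_reply, machine_reply))
        transcript_a = [a(p) for p in prompts]
        transcript_b = [b(p) for p in prompts]
        # The judge sees only the two anonymous transcripts and
        # guesses whether A is the machine.
        guess_a_is_machine = judge(transcript_a, transcript_b)
        correct += (guess_a_is_machine == machine_is_a)
    return correct / trials

# A judge with no real signal can only guess at random.
def guessing_judge(transcript_a, transcript_b):
    return random.random() < 0.5

accuracy = imitation_game(guessing_judge, ["What is a sonnet?"])
print(round(accuracy, 2))  # hovers near 0.5
```

The point of the sketch is only to make the structure of the test explicit: the teletype restriction appears here as the fact that the judge receives nothing but text transcripts.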

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being were able to tell easily that they were communicating with a computer because the computer knew more, and answered more accurately and more quickly, than any person possibly could? (Think Watson and Jeopardy.) Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent?

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to things like earthquakes, weather, natural disasters, plagues, and so on. These are claimed to be signs that God (or the gods) are angry, jealous, warning us, etc. So, personally, I would not put much faith in the general populace being able to make this discrimination.

Third, why the restriction to a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate, via teletype, whether I was communicating with a potato or a four iron. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoness”? The restriction to a teletype only makes sense if we prejudge the issue of what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any form that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, and so on without using symbols. After the fact, people can describe some aspects of these activities with symbols, but that does not mean they are primarily symbolic activities. In terms of the number and connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model, but no one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no one would claim that the model constituted a tornado! Yet when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence. When humans “think,” there is most often an emotional and subjective component. While we are not conscious of every process our brains engage in, there is nonetheless consciousness present during our thinking, and this consciousness seems to be a critical part of what it means to have human intelligence.

Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately, and in more domains, than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the next blog, we begin exploring some possible scenarios around the concept of “The Singularity.”