Tags
AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring
Tutoring Intelligent Systems.

Learning by modeling; in this case, by modeling something in the real world.
Of course, the title of the chapter is a takeoff on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples geared toward the specific gaps or misconceptions of that particular student. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human classroom instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor,” there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the proceedings: Radlinski, B., Atwood, M., and Villano, M., DIME: Distributed Intelligent Multimedia Education, Proceedings of the CHI ’94 Conference Companion on Human Factors in Computing Systems, pages 15–16, ACM, New York, NY, USA, 1994.)
The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. It seems as though many AI systems take one of two approaches: have human experts encode knowledge rather directly, or expose the system to many examples and let it learn on its own. Human beings often learn by being exposed to examples and having a guide, tutor, or coach who helps them focus, provides modeling, and chooses the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to exposure to many Jeopardy questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.
In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. The extrapolation from what the tutor actually said (presumably to encourage “Sing” to consider other possibilities about John and Alan), combined with another hint about the implications of being differently abled, leads to the idea that there is no necessity for the AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates that it will take a long time to solve an international crisis. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and is solved much more quickly.
Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, and so on, and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although, to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might help people see their way clear to other solutions to their problems.
The story ends with a refrain paraphrased from the TV series “The West Wing.” “What comes next?” is meant to be reminiscent of “What’s next?”, which President Bartlet uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it is on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next once AI systems are able not only to understand and utilize human emotion but also to invent their own emotional states on the fly, based on the nature of the problem at hand. Indeed, what comes next?