An Ounce of Prevention: Chapter 5 of Turing’s Nightmares
Hopefully, readers will realize that I am not against artificial intelligence; nor do I think the outcomes of increased intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson is already “digesting” the vast and ever-growing medical literature. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember everything about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not-too-distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.
Not only medicine but health more broadly provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, LoseIt, and similar apps already help track people’s habits, and for many people, this really does help them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.
In Chapter 5, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA uses that fact as well to help persuade him.
It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary: the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It is fairly clear to the reader that, in many ways, Lee is in charge, though it may not be obvious to Adam.
In some ways, having an AI system that is neither “clueless” as most systems are today nor “arrogant” as we might imagine a super-intelligent system to be (and as the systems in Chapters 2 and 3 were), but instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from overeating, eating candy, smoking cigarettes, and so on? Would we be happy to have such a good “friend,” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier simply having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decisions. Or would we prefer a PA that “keeps us on track” even while pretending that we are in charge?