petersironwood

~ Finding, formulating and solving life's frustrations.
Tag Archives: cognitive computing

Chapter 13: Turing’s Nightmares

17 Sunday Apr 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, crime and punishment, ethics, the singularity

CRIME AND PUNISHMENT


Chapter 13 of Turing’s Nightmares concerns itself with issues of crime and punishment. Our current system of criminal justice has evolved over thousands of years. Like everything else about modern life, it is based on a set of assumptions. While accurate DNA testing (and other modern technologies) have profoundly impacted the criminal justice system, super-intelligence and ubiquitous sensors and computing could well have even more profound impacts.

We often talk of punishment as being what is “deserved” for the crime. But we cannot change the past. It seems highly unlikely that even a super-intelligent computer system will be able to change the past. The real reason for punishment is to change the future. In Medieval Europe, a person who stole bread might well be hanged in the town square. One reason for meting out punishment in a formal system, then as well as now, is to prevent informal and personal retribution, which could easily spiral out of control and destroy the very fabric of society. A second rationale is the prevention of future crime by the punished person. If they are hanged, they cannot commit that (or any other) crime. The reason for hanging people publicly was to discourage others from committing similar crimes.

Today’s society may appear slightly more “merciful” in that first-time offenders for some crimes may get off with a warning. Even for repeated or serious crimes, the burden of proof is on the prosecution and a person is deemed “innocent until proven guilty” under US law. I see three reasons for this bias. First, there is often a paucity of data about what happened. Eyewitness accounts still count for a lot, but studies suggest that eyewitnesses are often quite unreliable and that their “memory” for events is clouded by how questions are framed. For instance, studies by Elizabeth Loftus and others demonstrate that people shown a car crash on film and asked to estimate how fast the cars were going when they bumped into each other will estimate a much slower speed than if asked how fast the cars were going when they crashed into each other. Computers, sensors, and video surveillance are becoming more and more prevalent. At some point, juries, if they still exist, may well be watching crimes as recorded, not reconstructing them from scanty evidence.

A second reason for presuming innocence is the impact of bias. This is also why there is a jury of twelve people and why potential jurors can be dismissed ahead of time “for cause.” If crimes are judged, not by a jury of peers, but by a super-intelligent computer system, it might be assumed that such systems will not have the same kinds of biases as human judges and juries. (Of course, that assumption is not necessarily valid; it is a theme reflected in many chapters of Turing’s Nightmares and hence the topic of other blog posts.)

A third reason for showing “mercy” and making conviction difficult is that predicting future human behavior is difficult. Advances in psychological modeling already make it possible to predict behavior much better than we could a few decades ago, at least under very controlled conditions. But we can easily imagine that a super-intelligent system may be able to predict with a fair degree of accuracy whether a person who committed a crime in the past will commit one in the future.

In chapter 13, the convicted criminal is given “one last chance” to show that they are teachable. The reader may well question whether a “test” is a valid part of criminal justice, but tests have often played that role in the not-so-distant past. Many of those earlier “trials by fire” were based on superstition, but today, we humans can and have designed tests that predict future behavior to a limited degree. Tests help determine whether someone is granted admission to a college, medical school, law school, or business school. Often the tests are only moderately predictive. For instance, the SAT correlates with college performance only about .4, which means it predicts a mere 16% of the variance. From the standpoint of the individual, the score is not really of much use. From the standpoint of the college administration, however, 16% can make the test very worthwhile. It may well be the case that a super-intelligent computer system could do a much better job of constructing a test to determine whether a criminal is likely to commit other crimes.
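To make that last point concrete, here is a minimal sketch (my own illustration, not from the book) of why a predictor correlating .4 with outcomes is nearly useless for judging any one individual yet still worthwhile for an institution selecting from a large pool. It assumes a simple bivariate-normal model; the admission cutoff and sample size are arbitrary.

import random, statistics

r = 0.4
print("variance explained:", r ** 2)  # 0.4 squared = 0.16, i.e., 16%

random.seed(0)

def applicant():
    # test score and later performance are standardized (mean 0, sd 1),
    # correlated at r; the rest of performance is "noise" the test misses
    score = random.gauss(0, 1)
    performance = r * score + random.gauss(0, (1 - r ** 2) ** 0.5)
    return score, performance

pool = [applicant() for _ in range(100_000)]
admitted = [perf for score, perf in pool if score > 1.0]  # admit the top ~16% by score
print("mean outcome, admitted:", round(statistics.mean(admitted), 2))          # roughly +0.6 sd
print("mean outcome, whole pool:", round(statistics.mean([p for _, p in pool]), 2))  # roughly 0

For any single applicant the test leaves most of the variance unexplained, but averaged over a hundred thousand applicants the selected group is measurably better, which is exactly the asymmetry described above.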

One could imagine that if a computer can predict human behavior that well, then it should be able to “cure” any hardened criminal. However, even a super-intelligent computer will presumably not be able to defy the laws of physics. It will not be able to position the planet Jupiter safely in orbit a quarter million miles from earth in order to allow us to view a spectacular night sky. Since people form closed systems of thought, it may be equally impossible to cure everyone of criminal behavior, even for super-intelligent systems. People maintain false belief systems in the face of overwhelming evidence to the contrary. Indeed, the “trial by fire” that Brain faces is essentially a test to see whether he is or is not open to change based on evidence. Sadly, he is not.

Another theme of chapter 13 is that Brain’s trial by fire is televised. This is hardly far-fetched. Not only are (normal) trials televised today; so-called “reality TV shows” put people in all sorts of difficult situations. What might be perceived as a high level of cruelty in having people watch Brain fail his test is already present in much of what is available on commercial television. At least in the case of the hypothetical trial of Brain, there is a societal benefit in that it could reduce the chances for others to follow in Brain’s footsteps.

We only see hints of Brain’s crime, which apparently involves elder fraud. As people are capable of living longer, and as overwhelming greed has moved from the “sin” to the “virtue” column in modern American society, we can expect elder fraud to increase as well, at least for a time. With increasing surveillance, however, we might eventually see an end to it.

Of course, the name “Brain” was chosen because, in a sense, our own intelligence as a species — our own brain — is being put on trial. Are we capable of adapting quickly enough to prevent ourselves from being the cause of our own demise? And, just as the character Brain is too “closed” to make the necessary adaptations to stay alive, despite the evidence he is presented with, so too does humanity at large seem to be making the same kinds of mistakes over and over (prejudice, war, rabble-rousing, blaming others, assigning power to those with money, funneling the most money to those whose only “talent” consists of controlling the flow of money and power, etc.). We seem to have gained some degree of insight, but meanwhile have developed numerous types of extremely effective weapons: biological, chemical, and atomic. Will super-intelligence be another such weapon? Or will it instead be used in the service of preventing us from destroying each other?

Link to chapter 13 in this blog

Turing’s Nightmares (print version on Amazon)

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. It has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory is that the overall problem will be solved as well. The tricky part is deciding what we consider “problem” versus “context” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service included in its employ engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Shaw, Newell and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you will need to have the book, so purchasing the book becomes your sub-goal; that is now your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
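Here is a toy sketch of that difference (my own illustration, not the original GPS code): a rigidly recursive planner expands the whole sub-goal chain it started with, while an opportunistic one re-checks the top-level goal whenever new information arrives. The goal names and the "world" dictionary are invented for the example.

world = {"roommate_has_book": True}

CHAIN = {
    "read the book": "buy the book",
    "buy the book": "get $50 in cash",
    "get $50 in cash": "shovel uncle's driveway",
    "shovel uncle's driveway": "borrow roommate's car",
}

def recursive_plan(goal, depth=0):
    # expand sub-goals the way the early GPS would, never revisiting the top goal
    print("  " * depth + goal)
    if goal in CHAIN:
        recursive_plan(CHAIN[goal], depth + 1)

def opportunistic_plan(goal):
    # before grinding through the chain, notice that the world has changed
    if goal == "read the book" and world["roommate_has_book"]:
        print("borrow the roommate's copy and start reading")
    else:
        recursive_plan(goal)

recursive_plan("read the book")      # walks the entire chain regardless
opportunistic_plan("read the book")  # short-circuits on the new information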

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, if there is a high degree of trust, if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discouraged “time-wasting” activities like socializing with co-workers, and “saved money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom then, the root cause of the problems illustrated in chapter eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, making globally intelligent choices nearly impossible due to a lack of knowledge and a lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4000 and 7500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M”, “University of Michigan”, “Michigan”, “The University of Michigan”, or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine in any way. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need to allow users to communicate in any way that there was an error in the design. If one tries to communicate “out of band”, one is led to a FAQ page and ultimately a form to fill out. That form presumes that all errors are user errors and that all of those user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
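For what it is worth, the “escape hatch” the form lacked is not hard to sketch. The fragment below is hypothetical (the institution list and the matching cutoff are made up): it accepts free text, suggests close matches from the known list, and flags anything unlisted for human review instead of refusing it.

import difflib

KNOWN_INSTITUTIONS = ["University of Michigan", "Michigan State University",
                      "Eastern Michigan University"]  # real lists change constantly

def record_institution(user_text):
    suggestions = difflib.get_close_matches(user_text, KNOWN_INSTITUTIONS, n=3, cutoff=0.6)
    # keep whatever the user typed; never assume the list anticipated every answer
    return {"entered": user_text,
            "suggestions": suggestions,
            "needs_review": not suggestions}

print(record_institution("U of Michigan"))
print(record_institution("Some Brand-New Institute"))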

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by siphoning just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A.; Shaw, J.C.; Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Quinlan, J.R. & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625-646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology 6 , pp. 257-269.

Turing’s Nightmares

Turing’s Nightmares: Chapter 10

31 Thursday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, feelings, the singularity, Turing


Chapter Ten of Turing’s Nightmares explores the role of emotions in human life and in the life of AI systems. The chapter mainly explores the issue of emotions from a practical standpoint. When it comes to human experience, one could also argue that, like human life itself, emotions are an end and not just a means to an end. From a human perspective, or at least this human’s perspective, a life without any emotion would be a life impoverished. It is clearly difficult to know the conscious experience of other people, let alone animals, let alone an AI system. My own intuition is that what I feel emotionally is very close to what other people, apes, dogs, cats, and horses feel. I think we can all feel love, both romantic and platonic; that we all know grief, fear, anger, and peace, as well as a sense of wonder.

As to the utility of emotions, I believe an AI system that interacts extremely well with humans will need to “understand” emotions: how they are expressed, how they can be hidden or faked, and how they impact human perception, memory, and action. Whether a super-smart AI system needs emotions to be maximally effective is another question.

Consider emotions as a way of biasing perception, action, memory, and decision making depending on the situation. If we feel angry, it can make us physically stronger and alter our decision making. For the most part, decision making seems impaired by anger, but anger can also make us feel at least temporarily less guilty about hurting someone or something else. There might be situations where that proves useful. However, since we tend to surround ourselves with people and things we actually like, there are many occasions when anger produces counter-productive results.

There is no reason to presume that a super-intelligent AI system would need to copy the emotional spectrum of human beings. It may invent a much richer palette of emotions, perhaps as many as 100 or 10,000 that it finds useful in various situations. The best emotional predisposition for doing geometry proofs may be quite different from the best emotional predisposition for algebra proofs which again could be different from what works best for chess, go, or bridge.

Assuming that even a very smart machine does not possess infinite resources, it might be worthwhile for it to have different modes, whether or not we call them “emotions.” Depending on the type of problem to be solved or the situation at hand, not only should different information be input into the system, but that information should be processed differently as well.

For example, if any organism or machine is facing “life or death” situations, it makes sense to be able to react quickly and focus on information such as the location of potential prey, predators, and escape routes. It also makes sense to use well-tested methods rather than taking an unknown amount of time to invent something entirely new.
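A crude sketch of that idea follows (my own illustration, not anything from the chapter): each “mode” is just a bundle of processing parameters (what to attend to, how long to deliberate, whether to prefer well-tested methods), and the situation selects the mode. The mode names and the trigger condition are invented.

from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    attend_to: tuple            # which features of the situation get priority
    deliberation_ms: int        # how long to search before committing to an action
    prefer_known_methods: bool  # reuse well-tested methods vs. invent something new

THREAT = Mode("threat", ("predators", "escape_routes"), 50, True)
EXPLORE = Mode("explore", ("novel_objects", "patterns"), 5000, False)

def choose_mode(situation):
    # crude trigger: life-or-death situations get the fast, conservative mode
    return THREAT if "danger" in situation else EXPLORE

print(choose_mode("danger: predator nearby"))
print(choose_mode("quiet afternoon, unfamiliar puzzle"))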

People often become depressed when there have been many changes in quick succession. This makes sense because many large changes mean that “retraining” may be necessary. So instead of rushing headlong to make decisions and take actions that may no longer be appropriate, watching what occurs in the new situations first is less prone to error. Similarly, society has developed rituals around large changes such as funerals, weddings, and baptisms. Because society designs these rituals, the individual facing changes does not need to invent something new when their evaluation functions have not yet been updated.

If super-intelligent machines of the future are to keep getting “better” they will have to be able to explore new possibilities. Just as with carbon-based life forms, intelligent machines will need to produce variety. Some varieties may be much more prone to emotional states than others. We could hope that super-intelligent machines might be more tolerant of a variety of emotional styles than people seem to be, but they may not.

The last theme introduced in chapter ten has been touched on before; viz., that values, whether introduced intentionally or unintentionally, will bias the direction of evolution of AI systems for many generations to come. If the people who build the first AI machines feel antipathy toward feelings and see no benefit to them from a practical standpoint, emotions may eventually disappear from AI systems. Does it matter whether we are killed by a feelingless machine, a hungry shark, or an angry bear?

————————————-

For a recent popular article about empathy and emotions in animals, see Scientific American special collector’s edition, “The Science of Dogs and Cats”, Fall, 2015.

Turing’s Nightmares

Turing’s Nightmares: Chapter 9

25 Friday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Eden, the singularity, Turing, utopia

Why do we find stories of Eden or Utopia so intriguing? Some tend to think that humanity “fell” from an untroubled state of grace. Some believe that Utopia is still to come, brought about by behavioral science (B.F. Skinner’s “Walden Two”) or technology (e.g., Kurzweil’s “The Singularity is Near”). Even American politics often echoes these themes. On the one hand, many conservatives tend to imagine America was a kind of Eden before big government, political correctness, and fairness came into play (e.g., “Make America Great Again,” used by Reagan as well as Trump; “Restore America Now,” 2012 Ron Paul). On the other hand, many liberal slogans point toward a future Utopia (e.g., Gore – “Leadership for the New Millennium”; Obama – “Yes We Can”; Sanders – “A Future To Believe In”). Indeed, much of the underlying conservative vs. liberal “debate” centers on whether you mainly believe that America was close to paradise and we need to get back to it, or whether you believe that, however good America was, it can move much closer to a Utopian vision in the future.

In Chapter 9 of “Turing’s Nightmares”, the idea of Eden is brought in as a method of testing. In this case, we mainly see the story, not from God’s perspective or the human perspective, but from the perspective of a super-intelligent AI system. Why would such a system try to “create a world”? We could imagine that a super-intelligent, super-powerful being might have rather run out of challenges of the type we humans generally have to face (at least in this interim period between the Eden of the past and the Utopia of the future). What to do? Well, why not explore deep philosophical questions such as good vs. evil and free will vs. determinism by creating worlds in which to explore these ideas? Debating such questions, at least by human beings, has not led to any universally accepted answers, and we’ve been at it for thousands of years. It may be that a full-scale experiment is the way to delve more deeply.

However “intelligent” and “knowledgeable” a super-smart computer system of the future might be, it will still most likely be the case that not everything about the future will be predictable. In order to simulate the universe in detail, the computer would have to be as extensive as the universe. Of course, it could be that many possible states “collapse” due to reasons of symmetry or that a much smaller number of “rules” could predict things. There is no way to tell at this point. As we now see the world, even determining how to play a “perfect” game of chess by checking all possible moves would require a “more than universe-sized” computer. It could be the case that a fairly small set of (as yet undetermined) rules could produce the same results. And maybe that would be true about biological and social evolution. In Isaac Asimov’s wonderful Foundation series, Hari Seldon develops a way to predict the social and political evolution of humanity from a series of equations. Although he cannot predict individual behavior, the collective behavior is predictable. In Chapter 9, our AI system believes that it can predict human outcomes but still has enough doubt that it needs to test out its hypotheses.
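The back-of-the-envelope numbers behind “more than universe-sized” are easy to check. The figures below are the standard rough estimates (Shannon’s estimate of the chess game tree and the usual count of atoms in the observable universe), not anything from the book.

chess_game_tree = 10 ** 120      # Shannon's classic estimate of possible chess games
atoms_in_universe = 10 ** 80     # common rough estimate
print(chess_game_tree // atoms_in_universe)  # ~10^40 games per atom, with atoms to spare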

There is a very serious and as yet unknown question about our own future implicit in Chapter 9. It could be the case that we humans are fundamentally flawed by our genetic heritage. Some branches of primates behave in a very competitive and nasty fashion. It might well be that our genome will prevent us from stopping global climate change, or indeed that we are doomed to over-populate and over-pollute the world, or that we will eventually find “world leaders” who will pull nuclear triggers on an atomic armageddon. It might well be that our “intelligence” and even the intelligence of AI systems that start from the seeds of our thoughts are on a local maximum. Maybe dolphins or sea turtles would be a better starting point. But maybe, just maybe, we can see our way through to overcome whatever mindlessly selfish predispositions we might have and create a greener world that is peaceful, prosperous, and fair. Maybe.

Turing’s Nightmares

Walden Two

The Singularity Is Near

Foundation Series

Turing’s Nightmares: Eight

20 Sunday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, the singularity, Turing


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far out-stripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved for many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, makes it clear that even very long ago humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English contains phrases even today that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are already exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In Creativity and Rationale: Enhancing Human Experience By Design J. Carroll (Ed.), New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

Turing’s Nightmares: Seven

13 Sunday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, competition, cooperation, ethics, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might be a system that does not merely solve the problems given to it more quickly. It might also look for different ways to formulate the problem; it might look for the “question behind the question” or even look for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, such a machine may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “insure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have none? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John is making is implicit — that he is not trying to dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John already has his mind made up that intelligence is the ultimate goal and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better. 

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

Turing’s Nightmares

Turing’s Nightmares: Six

10 Thursday Mar 2016

Posted by petersironwood in sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, sports, Turing


Human Beings are Interested in Human Limits.

A Google AI system just won its second victory over the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players will learn faster and that top-level human play will improve. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors as training devices. However, very soon, these might also provide useful information during play. What about that? Suppose that you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs, tennis racquets, and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance enhancing” drugs just to stay healthy? Sharapova’s recent case is just one. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible for a computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system is able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw or whether a tennis player is about to serve down the T or out wide. Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or curve more quickly. But it also means that they are more likely to pick up subtle tip-offs in their opponents’ motion that give away their intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Roger Federer or Andy Murray in order to detect patterns of tip-offs and then that information was used to help train Djokovic to learn to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signals of baseline coaches, and so on.
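A first approximation of that kind of pattern-finding does not even require fancy analytics. The sketch below uses invented data standing in for thousands of charted serves; it simply counts how often each observable cue preceded each serve direction and predicts the most frequent outcome for a given cue.

from collections import Counter, defaultdict

charted_serves = [
    ("toss_left", "wide"), ("toss_left", "wide"), ("toss_left", "T"),
    ("toss_right", "T"), ("toss_right", "T"), ("toss_right", "body"),
]  # hypothetical (cue, outcome) pairs

outcomes_by_cue = defaultdict(Counter)
for cue, direction in charted_serves:
    outcomes_by_cue[cue][direction] += 1

def predict(cue):
    # return the most frequent outcome seen after this cue, if any
    return outcomes_by_cue[cue].most_common(1)[0][0] if cue in outcomes_by_cue else "unknown"

print(predict("toss_left"))   # 'wide' -- a tendency a human might never notice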

Instead of teaching Novak Djokovic these patterns ahead of time, suppose he were to have a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed Novak to anticipate better?

I do not know the “correct” ethical answer for all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance enhancing drugs, perhaps that is okay if and only if everyone else in the race knows that and has the opportunity to take the same drugs and if everyone watching knows it as well. Similarly, although I would prefer that tennis players only use IT for training, I would not be dead set against real time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe but it has a side-effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon

Turing’s Nightmares: Chapter Five

06 Sunday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, Personal Assistant, the singularity, Turing


 

An Ounce of Prevention: Chapter 5 of Turing’s Nightmares

Hopefully, readers will realize that I am not against artificial intelligence; nor do I think the outcomes of increased intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson is already “digesting” the vast and ever-growing medical literature. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember everything about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not too distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.

Not only medicine but also everyday health provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, LoseIt, and similar apps already help track people’s habits, and for many, this really helps them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.

In Chapter 5, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history in order to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA uses that fact to help persuade Jack.

It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary, the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It’s fairly clear to the reader that, in many ways, Lee is in charge though it may not be obvious to Adam.

In some ways, having an AI system that is neither “clueless” as most systems are today nor “arrogant” as we might imagine a super-intelligent system to be (and as the systems in chapters 2 and 3 were), but that instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from over-eating, eating candy, smoking cigarettes, etc.? Would we be happy to have such a good “friend” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier with just having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decision. Or would we prefer a PA that “keeps us on track” even while pretending that we are in charge?

Turing’s Nightmares: Chapter Four

01 Tuesday Mar 2016

Posted by petersironwood in driverless cars, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, the singularity, Turing, virtual reality

Considerations of “Turing’s Nightmares” Chapter Four: Ceci N’est Pas Une Pipe.

 


 

In this chapter, we consider the interplay of four themes. First, and most centrally, is the issue of what constitutes “reality.” The second theme is that what “counts” as “reality” or is seen as reality may well differ from generation to generation. The third theme is that AI systems may be inclined to warp our sense of reality, not simply to be “mean” or “take over the world” but to help prevent ecological disaster. Finally, the fourth theme is that truly super-intelligent AI systems might not appear so at all; that is, they may find it more effective to take a demure tone as the AI embedded in the car does in this scenario.

There is no doubt that, artificial intelligence and virtual reality aside, what people perceive is greatly influenced by their symbol systems, their culture, and their motivational schemes. Babies as young as six weeks are already apparently less able to discriminate differences within what their native language considers a phonemic category than they were at birth. In our culture, we largely come to believe that there is a “right answer” to questions. Suppose an animal is repeatedly presented with a three-choice problem, say among A, B, and C. A pays off randomly with a reward 1/3 of the time while B and C never pay off. A fish, a rat, or a very young child will quickly come to choose only A, thus maximizing their rewards. However, a child who has been to school (or an adult) will spend considerably more time trying to find “the rule” that allows them to win every time. Eventually, most will “give up” and choose only A, but in the meantime, they do far worse than a fish, a rat, or a baby. This is not to say that the conceptual frameworks that color our perceptions and reactions are always a bad thing. They are not. There are obvious advantages to learning language and categories. But our interpretations of events are highly filtered and distorted. Hopefully, we realize that this is so, but often we tend to forget.
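A small simulation makes the cost of “rule hunting” concrete. In the sketch below (parameters invented, but matching the setup described above), option A pays off one third of the time and B and C never do; always choosing A is compared with sampling all three options while searching for a pattern that is not there.

import random
random.seed(1)

def payoff(choice):
    # only A ever pays, and only about 1/3 of the time
    return 1 if choice == "A" and random.random() < 1 / 3 else 0

trials = 10_000
maximizer = sum(payoff("A") for _ in range(trials))
rule_hunter = sum(payoff(random.choice("ABC")) for _ in range(trials))

print("always choose A:", maximizer / trials)            # close to 0.33
print("still hunting for a rule:", rule_hunter / trials)  # close to 0.11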

Similarly, if you ask fans of two opposing teams to make a close call (for instance, whether there was pass interference in American football, or whether a tennis ball near the line was in or out), you tend to find that people’s answers are biased toward their team’s interest, even when their calls have no influence on the outcome.

Now consider that we keep striving toward more and more fidelity and completeness in our entertainment systems. Silent movies were replaced by “talkies.” Black and white movies and television were replaced by color. TV screens have gotten bigger. There are more 3-D movies and more entertainment is in high definition, even as sound reproduction has moved from monaural to stereo to surround sound. Research continues to allow the reproduction of smell, taste, tactile, and kinesthetic sensations. Virtual reality systems have become smaller and less expensive. There is no reason to suppose these trends will lessen any time soon. There are many advantages to using Virtual Reality in education (e.g., Stuart, R., & Thomas, J. C. (1991). The implications of education in cyberspace. Multimedia Review, 2(2), 17-27; Merchant, Z., Goetz, E., Cifuentes, L., Keeney-Kennicutt, W., and Davis, T. (2014). Effectiveness of virtual reality based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers and Education, 70, 29-40). As these applications become more realistic and widespread, do they influence the perceptions of what even “counts” as reality?

The answer to this may well depend on the life trajectory of individuals and particularly on how early in their lives they are introduced to virtual reality and augmented reality. I was born in a largely “analogue” age. In that world, it was often quite important to “read the manual” before trying to operate machinery. A single mistake could destroy the machine or cause injury. There is no way to “reboot” or “undo” if you cut a tree down wrongly so it falls on your house. How will future generations conceptualize “reality” versus “augmented reality” versus “virtual reality”?

Today, people often believe it is important for high school students to physically visit various college campuses before making a decision about where to go. There is no doubt that this is expensive in terms of time, money, and the use of fossil fuels. Yet there is a sense that being physically present allows the student to make a better decision. Most companies similarly only hire candidates after face-to-face interviews, even though there is no evidence that this adds to a company’s ability to predict who will be a productive employee. More and more such interviewing, however, is being done remotely. It might well be that a “super-intelligent” system would arrange for people who wanted to visit someplace physically to visit it virtually instead, while making it seem as much as possible as though the visit were “real.” After all, left to their own devices, people seem to be making painfully slow (indeed, too slow) progress toward reducing their carbon footprints. AI systems might alter this trajectory to save humanity, to save themselves, or both.

In some scenarios in Turing’s Nightmares, the AI system is quite surly and arrogant. But in this scenario, the AI system takes on the demeanor of a humble servant. Yet it is clear (at least to the author!) who really holds the power. This particular AI embodiment sees no necessity of appearing to be in charge. It is enough to make it so and to manipulate the “sense of reality” that the humans have.

Turing’s Nightmares

Turing’s Nightmares: Chapter Three

27 Saturday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, the singularity, Turing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence, 2) the value of having multiple and diverse AI systems living somewhat different lives and interacting with each other for improving intelligence, 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought, and 4) the point that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct and not simply rely on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some practical applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of a particular person more easily than we could develop speaker-independent speech recognition and generic preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having actual robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18.

I would not personally argue that being an entity that moves through space and perceives is necessary for having any intelligence or, for that matter, any consciousness. However, it seems quite natural to believe that the quality of intelligence and consciousness is influenced by what the entity can perceive and do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed or becomes paralyzed later in life, this does not necessarily greatly influence the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were developed historically by a population that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the specific senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were connected to a pivoted gondola and one kitten was able to “walk” through a visual field while the other was passively moved through that visual field. The kitten that was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proc Natl Acad Sci U S A, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several different robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity seems an advantage when it comes to genetic evolution and when it comes to people comprising teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligence by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity”, we are going to slow down progress considerably. On the other hand, if we do not “keep tabs”, then very soon, we will have no real idea what they are up to! An analogy might be the first “proof” that you only need four colors to color any planar map. There were so many cases (nearly 2000) that this proof made no sense to most people. Even the algebraic topologists who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify). So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be way too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching the “Golden Rule.” But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified as “Do unto others as you would have them do to you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient, or sometimes just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares

