Category Archives: The Singularity

Turing’s Nightmares: Chapter 15

16 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring

Tutoring Intelligent Systems.


Learning by modeling; in this case by modeling something in the real world.

Of course, the title of the chapter is a takeoff on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples geared toward the specific gaps or misconceptions that particular student has. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor,” there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the proceedings: Radlinski, B., Atwood, M., and Villano, M., DIME: Distributed Intelligent Multimedia Education, Proceedings of the CHI ’94 Conference Companion on Human Factors in Computing Systems, pp. 15-16, ACM, New York, NY, 1994.)
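To make the core idea concrete, here is a minimal sketch in Python of what a student model can look like. To be clear, this illustrates the general approach, not Anderson’s actual model-tracing algorithm; the skill names and the update rule are invented for the example.

```python
# A minimal sketch of an "intelligent tutor" student model (illustrative only;
# not Anderson's model-tracing algorithm). Skills and update rule are invented.

class StudentModel:
    def __init__(self, skills):
        # Start with a neutral mastery estimate (0.0 to 1.0) for every skill.
        self.mastery = {skill: 0.5 for skill in skills}

    def observe(self, skill, correct, rate=0.2):
        # Nudge the estimate toward 1 on a correct answer, toward 0 otherwise.
        target = 1.0 if correct else 0.0
        self.mastery[skill] += rate * (target - self.mastery[skill])

    def next_exercise(self):
        # Tutor the skill with the lowest estimated mastery --
        # the student's biggest inferred gap.
        return min(self.mastery, key=self.mastery.get)

tutor = StudentModel(["recursion", "list-structure", "conditionals"])
tutor.observe("recursion", correct=False)
tutor.observe("conditionals", correct=True)
print(tutor.next_exercise())  # -> "recursion"
```

Even in this toy version, the essential property is visible: instruction is driven by an explicit, continuously updated estimate of what the student knows, rather than by a fixed lesson sequence.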

The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. Many AI systems seem to take one of two approaches: have human experts encode knowledge fairly directly, or expose the system to many examples and let it learn on its own. Human beings often learn by being exposed to examples while a guide, tutor, or coach helps them focus, provides modeling, and chooses the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to exposure to many Jeopardy questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.

In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. “Sing” extrapolates from what the tutor actually said (presumably meant to encourage Sing to consider other possibilities about John and Alan), combines it with another hint about the implications of being differently abled, and arrives at the idea that there is no necessity for an AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates that it will take a long time to solve an international crisis. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and quicker to solve.

Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, etc. and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might really help people see their way clear to find other solutions to problems.


The story ends with a refrain paraphrased from the TV series “The West Wing” — “What comes next?” is meant to be reminiscent of “What’s next?”, which President Bartlet uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater games; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it’s on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next after AI systems are able not only to understand and utilize human emotions but also to invent their own emotional states on the fly based on the nature of the problem at hand. Indeed, what comes next?

Turing’s Nightmares: Chapter 14

02 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, pets, the singularity, Turing


Dear reader: spoiler alert! Before reading this blog post, you may want to read the associated chapter. You can buy the physical book, Turing’s Nightmares, at this link:

http://tinyurl.com/hz6dg2d

An earlier version of the chapter discussed below can be found at this link:

https://petersironwood.wordpress.com/2015/10/

One of the issues raised by chapter 14 of Turing’s Nightmares is that the scenario presumes that, even in the post-singularity future, there will still be a need for government. In particular, the future envisioned includes individuals as well as a collective, and the goals of the “collective” will remain somewhat different from the goals of various individuals. Indeed, an argument can be made that the need for complex governmental processes and structures will increase with hyper-intelligence. But that argument will be saved for another time.

This scenario further assumes that advanced AI systems will have emotions and emotional attachments to other complex systems. What is the benefit of having emotional attachments? Some people may feel that emotional attachments are as outdated as the appendix; perhaps they had some function when humans lived in small tribes, but now they cause as much difficulty as they confer advantage. Even if you believe that emotional attachments are great for humans, you still might be puzzled as to why it could be advantageous for an AI system to have any.

When it comes to people, they vary a lot in their capabilities, habits, and so on. So one reason emotional attachments “make sense” is to prefer, and act in the interest of, people who have a range of useful and complementary abilities and habitual behaviors. Wouldn’t you naturally start to like someone who has similar interests, other things being equal? Moreover, as you work with someone else toward a common goal, you begin to understand and learn how to work together better. You learn to trust each other and to communicate in shorthand. If you become disconnected from such a person, it can be disconcerting for all sorts of reasons. But exactly the same could hold true for an autonomous agent with artificial intelligence. There could be reasons for having not one ubiquitous type of robot but millions of different kinds. Some of these would work well together, and it could make sense to have them “bond” and differentially prefer one another’s proximity and interaction.

Humans, of course, also form emotional attachments, sometimes very deep, with animals. Most commonly, people form bonds with cats, dogs, and horses, but people have had a huge variety of pets including birds, turtles, snakes, ferrets, mice, rabbits, and even tarantulas. What’s up with that? The discussion above about emotional attachment was intentionally “forced” and “cold” because human attachments cannot be well explained in utilitarian terms. People love others who have no possible way to offer back any value other than their love in return.

In some cases, pets do have some utilitarian value such as catching mice, barking at intruders, or pulling hay wagons. But overwhelmingly, people love their pets because they love their pets! If asked, they may say it is because their pets are “cute” or “cuddly,” but this doesn’t really answer the question of why people love pets. According to a review by John Archer published in the July 1997 issue of Evolution and Human Behavior, “These mechanisms can, in some circumstances, cause pet owners to derive more satisfaction from their pet relationship than those with humans, because they supply a type of unconditional relationship that is usually absent from those with other human beings.”

However, there are also other hypotheses. For example, the biophilia hypothesis of Edward O. Wilson (Biophilia, 1984)

http://www.amazon.com/Biophilia-Edward-Wilson/dp/0674074424 

suggests that during early hominid history, there was a distinct survival advantage to observing and remaining close to other animals living in nature. Would it make more sense to gravitate toward a habitat filled with life, or one utterly devoid of it? Humans and other animals generally want to move toward similar things (fresh water, a food supply, cover, reasonable temperatures) and to avoid others (dangerous places, temperature extremes). This might explain why people like lush and living environments, but it probably does not explain, in itself, why we actually love our pets.

Perhaps one among many possible reasons is that pets reflect aspects of our most basic natures. In civilization, these aspects are often hidden by social conventions. In effect, we can learn about how we ourselves are by observing and interacting with our pets. Among the various reasons why we love our pets, this strikes me as the most likely one to hold true for super-AI systems as well. Of course, they may also like cats and dogs for the same reason. But just as most of us prefer cats and dogs over turtles and spiders because of the complexity and similarity of mammalian behavior, we can imagine that post-singularity AI systems might prefer human pets: we would be more complex and would probably, at least initially, share many of the values, prejudices, and interests of the AI systems, since their initial programming would inevitably reflect humans.

Another premise of chapter 14 is that even with super-intelligent systems, resources will not be infinite. Many dystopian and utopian science fiction works alike seem to assume that, in the future, space travel (for example) will be dirt cheap. That might happen; ignoring economic scarcity certainly makes writing more convenient. Realistically, though, I see no reason why resources will become essentially infinite, that is, so universally cheap that there will no longer be any contention for them. It is conceivable that super-intelligent beings might discover entirely new physical properties of the universe that make this the new reality. But it is also possible that “super-intelligent beings” might be even more inclined to over-use the resources of the planet than we humans are, and that contention for resources will be even fiercer.

Increasing greediness seems at least as likely as the alternative. The alternative story runs like this: as humans gained more and more power, they became greedier and greedier and used up more and more resources, but only until that magic moment when machines became smarter than people, at which point the machines suddenly became interested in actually behaving sustainably. Maybe, but why?

Anyway, it’s getting late and past time to feed the six cats.

Interested readers may want to tune into a podcast tonight, Monday, May 2nd, at 7 p.m. PST, using the link below. I will be interviewed about robotics, artificial intelligence, and human-computer interaction.

https://blab.im/nick-rishwain-roboticslive-ep-1-human-computer-interactions-w-john-charles-truthtablejc

Chapter 13: Turing’s Nightmares

17 Sunday Apr 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, crime and punishment, ethics, the singularity

CRIME AND PUNISHMENT


Chapter 13 of Turing’s Nightmares concerns itself with issues of crime and punishment. Our current system of criminal justice has evolved over thousands of years. Like everything else about modern life, it is based on a set of assumptions. While accurate DNA testing (and other modern technologies) has profoundly impacted the criminal justice system, super-intelligence and ubiquitous sensors and computing could well have even more profound impacts.

We often talk of punishment as being what is “deserved” for the crime. But we cannot change the past. It seems highly unlikely that even a super-intelligent computer system will be able to change the past. The real reason for punishment is to change the future. In medieval Europe, a person who stole bread might well be hanged in the town square. One reason for meting out punishment in a formal system, then as well as now, is to prevent informal and personal retribution, which could easily spiral out of control and destroy the very fabric of society. A second rationale is the prevention of future crime by the punished person: if they are hanged, they cannot commit that (or any other) crime. The reason for hanging people publicly was to discourage others from committing similar crimes.

Today’s society may appear slightly more “merciful” in that first-time offenders for some crimes may get off with a warning. Even for repeated or serious crimes, the burden of proof is on the prosecution, and a person is deemed “innocent until proven guilty” under US law. I see three reasons for this bias. First, there is often a paucity of data about what happened. Eyewitness accounts still count for a lot, but studies suggest that eyewitnesses are often quite unreliable and that their “memory” for events is clouded by how questions are framed. For instance, studies by Elizabeth Loftus and others demonstrate that people shown a car crash on film and asked to estimate how fast the cars were going when they bumped into each other will estimate a much slower speed than if asked how fast the cars were going when they crashed into each other. Computers, sensors, and video surveillance are becoming more and more prevalent. At some point, juries, if they still exist, may well be watching crimes as recorded, not reconstructing them from scanty evidence.

A second reason for presuming innocence is the impact of bias. This is also why there is a jury of twelve people and why potential jurors can be dismissed ahead of time “for cause.” If crimes are judged, not by a jury of peers, but by a super-intelligent computer system, it might be assumed that such systems will not have the same kinds of biases as human judges and juries. (Of course, that assumption is not necessarily valid; it is a theme reflected in many chapters of Turing’s Nightmares and hence the topic of other blog posts.)

A third reason for showing “mercy” and making conviction difficult is that predicting future human behavior is hard. Advances in psychological modeling already make it possible, under very controlled conditions, to predict behavior much better than we could a few decades ago. And we can easily imagine that a super-intelligent system might be able to predict, with a fair degree of accuracy, whether a person who committed a crime in the past will commit one in the future.

In chapter 13, the convicted criminal is given “one last chance” to show that they are teachable. The reader may well question whether a “test” is a valid part of criminal justice, but tests have often played such a role in the not-so-distant past. Many of those earlier “trials by fire” were based on superstition, but today we humans can and have designed tests that predict future behavior to a limited degree. Tests help determine whether someone is granted admission to a college, medical school, law school, or business school. Often the tests are only moderately predictive. For instance, the SAT correlates with college performance at only about .4, which means it predicts a mere 16% of the variance. From the standpoint of the individual, the score is not really much use. From the standpoint of the college administration, however, 16% can make the test very worthwhile. It may well be the case that a super-intelligent computer system could do a much better job of constructing a test to determine whether a criminal is likely to commit other crimes.
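The arithmetic behind that claim is worth making explicit: the proportion of variance explained is simply the square of the correlation coefficient.

```python
# Variance explained is the square of the correlation coefficient (r-squared).
r = 0.4                      # reported correlation between SAT and college performance
variance_explained = r ** 2  # 0.16, i.e., a mere 16% of the variance
print(f"{variance_explained:.0%}")  # -> 16%
```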

One could imagine that if a computer can predict human behavior that well, then it should be able to “cure” any hardened criminal. However, even a super-intelligent computer will presumably not be able to defy the laws of physics. It will not be able to position the planet Jupiter safely in orbit a quarter million miles from earth in order to allow us to view a spectacular night sky. Since people form closed systems of thought, it may be equally impossible to cure everyone of criminal behavior, even for super-intelligent systems. People maintain false belief systems in the face of overwhelming evidence to the contrary. Indeed, the “trial by fire” that Brain faces is essentially a test to see whether he is or is not open to change based on evidence. Sadly, he is not.

Another theme of chapter 13 is that Brain’s trial by fire is televised. This is hardly far-fetched. Not only are (normal) trials televised today; so-called “reality TV shows” put people in all sorts of difficult situations. What might be perceived as a high level of cruelty in having people watch Brain fail his test is already present in much of what is available on commercial television. At least in the case of the hypothetical trial of Brain, there is a societal benefit in that it could reduce the chances for others to follow in Brain’s footsteps.

We only see hints of Brain’s crime, which apparently involves elder fraud. As people are capable of living longer, and as overwhelming greed has moved from the “sin” to the “virtue” column in modern American society, we can expect elder fraud to increase as well, at least for a time. With increasing surveillance, however, we might eventually see an end to it.

Of course, the name “Brain” was chosen because, in a sense, our own intelligence as a species — our own brain — is being put on trial. Are we capable of adapting quickly enough to prevent ourselves from being the cause of our own demise? Just as the character Brain is too “closed” to make the necessary adaptations to stay alive, despite the evidence he is presented with, so too does humanity at large seem to be making the same kinds of mistakes over and over (prejudice, war, rabble-rousing, blaming others, assigning power to those with money, funneling the most money to those whose only “talent” consists of controlling the flow of money and power, etc.). We seem to have gained some degree of insight, but meanwhile we have developed numerous types of extremely effective weapons: biological, chemical, and atomic. Will super-intelligence be another such weapon? Or will it instead be used in the service of preventing us from destroying each other?

Link to chapter 13 in this blog

Turing’s Nightmares (print version on Amazon)

Turing’s Nightmares: Chapter 12

12 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment


In this chapter, as in Chapter 11, the computer system protagonist “Colossus” attempts to save a family (and many others besides). In Chapter 11, Colossus was trying to save people from a real disaster but did a bad job of it. In Chapter 12, however, Colossus seems to be successfully saving folks from a disaster, but we discover at the end it was only a drill. The drill was accompanied by a lot of “fireworks” and illusion along with false information.

Perhaps it is unethical for an AI system to “lie” to people in order to gather more valid data about an evacuation. But maybe that is okay in the service of “the greater good” — in this case, learning how people would react to an emergency as well as testing evacuation plans logistically. Roger, however, is worried. He does not raise the issue of whether deception is unethical, but rather whether it is a good idea pragmatically.

Roger reasons that Colossus has lost considerable credibility with the public by pretending that the drill was real. His kids, however, disagree. To them, it seems perfectly acceptable to have Colossus lie in order to perform a good test. When Colossus discovers Roger’s misgivings, it decides that Roger needs some “readjustment.” Anyone “not on board” with the plans that Colossus has chosen to execute needs to be re-educated.

This plot point touches once again on the issue of hubris. The ancient Greeks liked this theme (e.g., the myths of Arachne and Icarus), but they are certainly not alone. Numerous other works of literature and modern movies and shows illustrate the theme, as do political debates. Obama, for instance, pointed out that, while an entrepreneur may be hard-working and imaginative, in order to achieve success they also used numerous resources that they had no part in creating. Indeed, the most talented individual ever born, if left to their own devices from birth, would surely perish quickly. Everyone needs to be taken care of initially. Even as adults, we benefit from cultural tools of thought such as language and mathematics, as well as material tools such as roads, addresses, phone systems, currency systems, the Internet, and so on, without which very little progress can be made. Of course, it is very easy to take these for granted. In “Cast Away,” Tom Hanks demonstrates how difficult life can be on a deserted island, left to one’s own devices. Even in that extreme circumstance, he relied on knowledge others gave him, e.g., that it is possible to create fire, fish for food, eat coconuts, take out an infected tooth, and so on. When he eventually returns, he clicks a fire stick off and on, no doubt thinking how much easier this is than it was on his island.

There seems little doubt that excessive pride in accomplishment or ability is an issue with humans. People often attribute their successes to their own brilliance rather than to help, culture, luck, and so on. This can easily manifest itself in professions each thinking that theirs is the “best.” In the later years of an undergraduate education, a typical student takes a number of courses in their field. When people with other majors are also in those classes, they tend not to do as well. This is partly because the other people don’t have great talents or interests in that particular area and partly because they haven’t had as many classes. Some students may view it as “proof,” however, that other folks just aren’t as smart as — choose one: premed, math, physics, prelaw, chemistry, computer science, etc. Of course, having people choose fields and focus on them allows great progress to be made on many fronts. If everyone tried to learn the same things, we would hardly be as advanced as we are today.

If people tend to over-estimate their own abilities compared with people in fields quite different from their own, it is easy to imagine that a computer system might well have the same kind of bias. By definition, the system knows what it knows and may assume that knowledge that it does not possess cannot be very important or useful.

In the story, Colossus assumes that Roger needs “readjustment.” It could have concluded that maybe it underestimated how much credibility would be lost by conducting a drill under conditions of deception, at least among people of a certain demographic. Or, it might conclude that that was a possibility and that perhaps a dialogue with Roger is in order. Colossus might go back and look at similar instances in history to determine whether deception loses trust. But it might just reason that, after all, it is so much smarter and so much more thoroughly educated than Roger (or any other individual) that dialogue is unnecessary. At this point, what could Colossus possibly learn from a mere mortal? By insisting that Roger (and presumably any others who protested) be “adjusted”, Colossus reinforces its own illusion of infallibility. In a similar fashion, human dictators tend to employ this same tactic. Ultimately, dictators tend to lose the advantage of honest feedback from others and tend to spin out of control often leading to their own demise.

Perhaps Colossus would be fine if it had a little “readjustment,” but at the point of evolution depicted in Chapter 12, it is too late for that. Colossus would view any attempt at “readjustment,” “tuning,” or “re-programming” as a threat. The name “Colossus” comes from a 1970 film called “Colossus: The Forbin Project,” which in turn is based on a 1966 sci-fi novel, Colossus, by D.F. Jones. It is also the name of the code-breaking computer used at Bletchley Park, where Turing worked, to help win World War II, as well as a more modern computer system used by insurance companies to help minimize claims. And of course, the Colossus of Rhodes, a giant statue at the entrance to a harbor, was one of the seven ancient wonders of the world. Presumably, the Colossus of Rhodes had no “real” power to move, let alone any intelligence; yet, for ancient people, it must have presented a psychologically intimidating presence. For people in the future, second-guessing a super-intelligent AI system must prove intimidating as well. We can imagine that not only family members but friends and colleagues too would tend to be quite biased toward thinking Colossus is correct and Roger is just wrong. Few might consider that it is Colossus, and not Roger, who requires “adjustment counseling.” Indeed, beyond a certain point on the path to and through “The Singularity,” debugging may no longer be an option. Who will bell the cat?

Turing’s Nightmares

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence; it has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of the technology involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory goes, the overall problem will be solved as well. The tricky part is separating what we consider “problem” from “context” and dividing the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service employed engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems, while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young, but one of the older dispatchers was considerably slower than most. She handled only about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot, and her phone calls resulted in an engineer being dispatched only about 1/1000 of the time, while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem with the behavior of the “General Problem Solver” — an early AI program developed by Newell, Shaw, and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (the Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so purchasing the book becomes your sub-goal. In order to meet that goal, you realize you will need to get $50 in cash; now getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
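For readers who like to see such things spelled out, here is a toy sketch of the difference — a caricature, not the actual GPS code. The goal names come from the example above; the data structure is invented for illustration.

```python
# Caricature of recursive goal chaining vs. opportunistic solving.
# Each goal maps to the subgoal that was planned to achieve it.
PLAN = {
    "read book": "get book",
    "get book": "get $50",
    "get $50": "shovel uncle's driveway",
    "shovel uncle's driveway": None,  # primitive action
}

def recursive_solver(goal, facts):
    # Grinds through the entire planned chain, ignoring new facts.
    steps = []
    while goal is not None:
        steps.append(goal)
        goal = PLAN.get(goal)
    return steps

def opportunistic_solver(goal, facts):
    # Before recursing, check whether some fact already satisfies the goal.
    if goal in facts:
        return [f"{goal} (already satisfied: borrow roommate's copy)"]
    sub = PLAN.get(goal)
    return [goal] + (opportunistic_solver(sub, facts) if sub else [])

facts = {"get book"}  # the roommate offers their copy
print(recursive_solver("read book", facts))      # full four-step chain
print(opportunistic_solver("read book", facts))  # stops after one check
```

The opportunistic version re-checks at every step whether the current goal can already be satisfied; the strictly recursive version never looks up from its plan.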

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, if there is a high degree of trust, if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discouraged “time-wasting” activities like socializing with co-workers, and “saved money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules they have been warned to follow, regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in chapter eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, making globally intelligent choices nearly impossible due to lack of knowledge and lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M,” “University of Michigan,” “Michigan,” “The University of Michigan,” or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan, and it isn’t even on the list, at least so far as I could determine. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need to allow users to communicate in any way that there was an error in the design. If one tries to communicate “out of band,” one is led to a FAQ page and ultimately a form to fill out. The form presumes that all errors are due to user errors and that all of these user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
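For what it’s worth, the fix is easy to sketch: suggest known institutions while still accepting free text, and flag unmatched entries for later review instead of rejecting them. The institution list below is, of course, a tiny stand-in for the real one.

```python
# Sketch of a form field that suggests known institutions but never rejects
# free text -- the opposite of the closed pull-down list described above.
import difflib

KNOWN = ["University of Michigan", "Michigan State University",
         "Carnegie Mellon University"]  # stand-in for the real list

def resolve_institution(user_text):
    matches = difflib.get_close_matches(user_text, KNOWN, n=1, cutoff=0.6)
    if matches:
        return matches[0], "matched"
    # Accept the entry anyway; a human (or later process) can reconcile it.
    return user_text, "unmatched: accepted as typed, flagged for review"

print(resolve_institution("U of Michigan"))            # -> matched
print(resolve_institution("The Hypothetical Institute"))  # -> flagged, not rejected
```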

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line, following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards, power, or prestige by sifting just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A., Shaw, J.C., and Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256-264.

Quinlan, J.R. and Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625-646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257-269.

Turing’s Nightmares

Turing’s Nightmares: Chapter 10

31 Thursday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, feelings, the singularity, Turing


Chapter Ten of Turing’s Nightmares explores the role of emotions in human life and in the life of AI systems. The chapter mainly explores the issue of emotions from a practical standpoint. When it comes to human experience, one could also argue that, like human life itself, emotions are an end and not just a means to an end. From a human perspective, or at least this human’s perspective, a life without any emotion would be a life impoverished. It is clearly difficult to know the conscious experience of other people, let alone animals, let alone an AI system. My own intuition is that what I feel emotionally is very close to what other people, apes, dogs, cats, and horses feel. I think we can all feel love, both romantic and platonic, and that we all know grief, fear, anger, and peace, as well as a sense of wonder.

As to the utility of emotions, I believe an AI system that interacts extremely well with humans will need to “understand” emotions: how they are expressed, how they can be hidden or faked, and how they impact human perception, memory, and action. Whether a super-smart AI system needs emotions of its own to be maximally effective is another question.

Consider emotions as a way of biasing perception, action, memory, and decision making depending on the situation. If we feel angry, it can make us physically stronger and alter our decision making. For the most part, decision making seems impaired by anger, but anger can also make us feel, at least temporarily, less guilty about hurting someone or something else. There might be situations where that proves useful. However, since we tend to surround ourselves with people and things we actually like, there are many occasions when anger produces counter-productive results.

There is no reason to presume that a super-intelligent AI system would need to copy the emotional spectrum of human beings. It might invent a much richer palette of emotions, perhaps 100 or even 10,000 of them, that it finds useful in various situations. The best emotional predisposition for doing geometry proofs may be quite different from the best one for algebra proofs, which again could differ from what works best for chess, go, or bridge.

Assuming that even a very smart machine does not possess infinite resources, it might be worthwhile for it to have different modes, whether or not we call them “emotions.” Depending on the type of problem to be solved or the situation at hand, not only should different information be input into the system, but that information should be processed differently as well.

For example, if any organism or machine is facing “life or death” situations, it makes sense to be able to react quickly and focus on information such as the location of potential prey, predators, and escape routes. It also makes sense to use well-tested methods rather than taking an unknown amount of time to invent something entirely new.
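One way to make this concrete in code: treat each “emotion” as a preset that biases how long the system deliberates, how much it explores versus relying on well-tested methods, and what information it attends to. This is purely illustrative; the mode names and parameter values below are invented.

```python
# Illustrative sketch: an "emotion" as a preset that biases processing.
# Mode names and parameter values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Mode:
    time_budget_s: float  # how long to deliberate
    exploration: float    # 0 = only well-tested methods, 1 = invent freely
    attention: str        # what information to prioritize

MODES = {
    "urgent":     Mode(time_budget_s=0.1, exploration=0.0,
                       attention="threats and escape routes"),
    "deliberate": Mode(time_budget_s=60.0, exploration=0.3,
                       attention="long-term consequences"),
    "playful":    Mode(time_budget_s=10.0, exploration=0.9,
                       attention="novel combinations"),
}

def choose_mode(situation):
    # A life-or-death situation forces fast, well-tested responses.
    return MODES["urgent"] if situation == "life-or-death" else MODES["deliberate"]

print(choose_mode("life-or-death"))
```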

People often become depressed when there have been many changes in quick succession. This makes sense, because many large changes mean that “retraining” may be necessary. Instead of rushing headlong into decisions and actions that may no longer be appropriate, it is less error-prone to first watch what occurs in the new situation. Similarly, society has developed rituals around large changes such as funerals, weddings, and baptisms. Because society designs these rituals, the individual facing change does not need to invent something new at a time when their evaluation functions have not yet been updated.

If super-intelligent machines of the future are to keep getting “better,” they will have to be able to explore new possibilities. Just as with carbon-based life forms, intelligent machines will need to produce variety. Some varieties may be much more prone to emotional states than others. We could hope that super-intelligent machines might be more tolerant of a variety of emotional styles than people seem to be, but they may not.

The last theme introduced in chapter ten has been touched on before; viz., that values, whether introduced intentionally or unintentionally, will bias the direction of evolution of AI systems for many generations to come. If the people who build the first AI machines feel antipathy toward feelings and see no benefit to them from a practical standpoint, emotions may eventually disappear from AI systems. Does it matter whether we are killed by a feelingless machine, a hungry shark, or an angry bear?

————————————-

For a recent popular article about empathy and emotions in animals, see Scientific American special collector’s edition, “The Science of Dogs and Cats”, Fall, 2015.

Turing’s Nightmares

Turing’s Nightmares: Chapter 9

25 Friday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Eden, the singularity, Turing, utopia

Why do we find stories of Eden or Utopia so intriguing? Some tend to think that humanity “fell” from an untroubled state of grace. Some believe that Utopia is still to come, brought about by behavioral science (B.F. Skinner’s “Walden Two”) or technology (e.g., Kurzweil’s “The Singularity Is Near”). Even American politics often echoes these themes. On the one hand, many conservatives tend to imagine America was a kind of Eden before big government and political correctness and fairness came into play (e.g., “Make America Great Again,” used by Reagan as well as Trump; “Restore America Now,” 2012 Ron Paul). On the other hand, many liberal slogans point toward a future Utopia (e.g., Gore: “Leadership for the New Millennium”; Obama: “Yes We Can”; Sanders: “A Future To Believe In”). Indeed, much of the underlying conservative vs. liberal “debate” centers on whether you believe that America was once close to paradise and we need to get back to it, or that, however good America was, it can move much closer to a Utopian vision in the future.

In Chapter 9 of “Turing’s Nightmares,” the idea of Eden is brought in as a method of testing. In this case, we mainly see the story not from God’s perspective or the human perspective, but from the perspective of a super-intelligent AI system. Why would such a system try to “create a world”? We could imagine that a super-intelligent, super-powerful being might have rather run out of challenges of the type we humans generally have to face (at least in this interim period between the Eden of the past and the Utopia of the future). What to do? Well, why not investigate deep philosophical questions such as good vs. evil and free will vs. determinism by creating worlds in which to explore them? Debating such questions, at least by human beings, has not led to any universally accepted answers, and we have been at it for thousands of years. It may be that a full-scale experiment is the way to delve more deeply.

However “intelligent” and “knowledgeable” a super-smart computer system of the future might be, it will still most likely be the case that not everything about the future is predictable. In order to simulate the universe in detail, the computer would have to be as extensive as the universe. Of course, it could be that many possible states “collapse” for reasons of symmetry, or that a much smaller number of “rules” could predict things. There is no way to tell at this point. As we now see the world, even determining how to play a “perfect” game of chess by checking all possible moves would require a “more than universe-sized” computer. It could be that a fairly small set of (as yet undetermined) rules could produce the same results, and maybe that would be true of biological and social evolution as well. In Isaac Asimov’s wonderful Foundation series, Hari Seldon develops a way to predict the social and political evolution of humanity from a series of equations. Although he cannot predict individual behavior, collective behavior is predictable. In Chapter 9, our AI system believes that it can predict human outcomes but still has enough doubt that it needs to test its hypotheses.
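A back-of-the-envelope calculation shows why brute-force chess needs a “more than universe-sized” computer. Using Shannon’s rough figures of about 35 legal moves per position and about 80 plies per game (his own round estimate of the game tree was 10^120), against the commonly cited figure of roughly 10^80 atoms in the observable universe:

```python
# Back-of-the-envelope: the chess game tree vs. atoms in the observable
# universe (~10^80). Shannon's rough figures: ~35 legal moves per position,
# ~80 plies per game.
import math

branching, plies = 35, 80
exponent = plies * math.log10(branching)  # log10 of the game-tree size
print(f"game tree ~ 10^{exponent:.0f}")   # -> game tree ~ 10^124
print("atoms     ~ 10^80")
# Even at one game-tree node per atom, the universe falls short by
# roughly 44 orders of magnitude.
```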

There is a very serious and as yet unanswered question about our own future implicit in Chapter 9. It could be that we humans are fundamentally flawed by our genetic heritage. Some branches of primates behave in a very competitive and nasty fashion. It might well be that our genome will prevent us from stopping global climate change, that we are doomed to over-populate and over-pollute the world, or that we will eventually find “world leaders” who will pull the nuclear triggers on an atomic armageddon. It might well be that our “intelligence,” and even the intelligence of AI systems that start from the seeds of our thoughts, is at a local maximum. Maybe dolphins or sea turtles would be a better starting point. But maybe, just maybe, we can see our way through to overcoming whatever mindlessly selfish predispositions we might have and create a greener world that is peaceful, prosperous, and fair. Maybe.

Turing’s Nightmares

Walden Two

The Singularity Is Near

Foundation Series

Turing’s Nightmares: Eight

20 Sunday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, the singularity, Turing


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far outstripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved over many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, has made clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people saw the advantages of being able to translate among languages. In fact, modern English contains phrases that illustrate how the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are already exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other, and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer-Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J.C. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote at the ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J.C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J.C., Kellogg, W.A., and Erickson, T. (2001). The knowledge management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J.C. (2001). An HCI agenda for the next millennium: Emergent global intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of Human-Centered Computing, Online Communities, and Virtual Environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing's Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

Turing’s Nightmares: Seven

13 Sunday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, competition, cooperation, ethics, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might be a system that not only solves the problems given to it more quickly, but also looks for different ways to formulate a problem, looks for the “question behind the question,” or even looks for problems. Problem formulation and problem finding are two essential skills that are seldom taught even in schools for humans. What about the prospect of machines that do this? If a machine’s intelligence is very different from ours, it may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have none? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John makes is implicit: he is not trying to dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John has already made up his mind that intelligence is the ultimate goal, and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better.

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of our most pressing problems even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program of better cooperation, and in this scenario he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid they might try to prevent him from doing so, either by talking him out of it or by appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

Turing’s Nightmares

Turing’s Nightmares: Six

10 Thursday Mar 2016

Posted by petersironwood in sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, sports, Turing


Human Beings are Interested in Human Limits.

A Google AI system just won its second victory over the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players will learn faster and that top-level human play will improve. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors in training devices. However, very soon, these might also provide useful information during play. What about that? Suppose you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs, tennis racquets, and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance enhancing” drugs just to stay healthy? Sharapova’s recent case is just one. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible for a computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system is able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw, or whether a tennis player is about to serve down the T or out wide. Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or curve more quickly. But it also means that they are more likely to pick up subtle tip-offs in their opponents’ motions that give away their intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Roger Federer or Andy Murray in order to detect patterns of tip-offs, and that information was then used to help train Djokovic to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signals of baseline coaches, and so on.
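A sketch of how such “tip-off” detection might look in code follows. The feature names and data are invented for illustration; a real system would extract features from video or motion capture and train on thousands of serves.

```python
# Sketch of "tip-off" detection: train a classifier to predict serve direction
# from pre-contact motion features. Features and data are invented.
from sklearn.tree import DecisionTreeClassifier

# [toss_lean_degrees, racquet_drop_depth] measured before ball contact
X = [[2.0, 0.3], [1.8, 0.4], [-1.5, 0.9],
     [-2.2, 0.8], [2.4, 0.2], [-1.9, 1.0]]
y = ["wide", "wide", "T", "T", "wide", "T"]  # where the serve actually went

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[2.1, 0.35]]))  # -> ['wide']: a cue a coach could teach
```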

Instead of teaching Novak Djokovic these patterns ahead of time, suppose he were to have a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed Novak to anticipate better?

I do not know the “correct” ethical answer to all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance-enhancing drugs, perhaps that is okay if and only if everyone else in the race knows it and has the opportunity to take the same drugs, and if everyone watching knows it as well. Similarly, although I would prefer that tennis players use IT only for training, I would not be dead set against real-time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe, but it has the side effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon
