petersironwood

~ Finding, formulating and solving life’s frustrations.
Monthly Archives: April 2016

Love Your Enemies?

Thursday, 21 April 2016

Posted by petersironwood in health, Uncategorized

≈ Leave a comment

Tags

cancer, disease, health care, pathogens, sports, wellness


(A short break from discussions of Turing’s Nightmares which we will return to tomorrow).

Jesus reportedly said this. When it comes to other human beings, one could take this attitude for religious reasons, because we are all creatures of God. One could also take this stance because, after all, we humans are very closely related genetically. We like to ask, “Are you related to that person?” Yet we share roughly 40 percent of our genes with crayfish and 90 percent with horses, and over 99 percent with so-called “unrelated” people. It makes little sense to call them “unrelated.” But what about non-human diseases? Can we “love” deadly bacteria, viruses, and cancer cells?

There is a sense in which the answer may be “yes”: not in the sense that we feel affection for them, but in the sense that we need to understand them. As explained in The Winning Weekend Warrior, if you can “understand” where your sports opponent is coming from, empathize with their perspective, and see what they like and don’t like, you can do a much better job of winning points and games and of competing well.

When it comes to disease, I think most people view pathogens as so “evil” or “despicable” that they never bother to ask themselves what the pathogen “wants.” Because of this attitude, the vast majority of treatments are designed to “kill off” the pathogen. A few approaches instead aim to boost the body’s natural defense mechanisms. But let us examine, for a moment, what other approaches become possible if we learn to see the world from the pathogens’ perspective.

The Pied Piper Approach. In the fairy tale, the Pied Piper, a talented musician, rids a town of rats by playing music so beautiful that they follow him out of town. When the townspeople renege on their promise to pay him, he takes revenge by using his music to lead all the children out of the town, never to be heard from again. Suppose we applied such a technique to bacteria, viruses, or metastasizing cancer cells. Instead of trying to poison and thereby kill cancer cells inside the human body (which typically also kills many healthy cells), suppose we discovered, for a particular type of cancer cell, which environment it finds most “attractive.” We could imagine applying a gradient of that environment so that, instead of migrating to other organs inside the human body, the cells found it more desirable to migrate to something outside the body but “connected” to it via a one-way shunt. Perhaps such an approach could be applied to viral, bacterial, or protozoan infections as well. Of course, the shunt might not “really be” something “good for” the virus, cancer, etc., but merely something that appears to be so, based on a deep understanding of how these enemy cells “perceive” the world.

The Entrapment Approach. The old saying goes that you can “catch more flies with honey than with vinegar.” Honey is attractive but also “sticky,” so the flies cannot easily leave it. Similarly, vice officers sometimes perform “sting operations” to catch people attempting to buy drugs or hire prostitutes. One can imagine various “traps” laid inside the body. Each trap would consist of a “bait” along with one-way “valves” that make it easy for pathogens to get in but difficult or impossible to leave. This approach is already used for “pantry moths”: each little trap holds a tiny amount of a pheromone that the moths find nearly irresistible. The moths go inside “in order” to find a mate, but instead find themselves trapped.

The Mimicry Approach. Monarch butterflies “taste bad” to a number of predators. A number of other butterflies that do not “taste bad” have evolved to look very similar to Monarchs in order to discourage predators. Applied to human disease, this approach would make people look “deadly” or “dangerous” to pathogens. It would require that we understand what sorts of situations these pathogens want to avoid. As with the Monarch mimics, there may be a disconnect between what really is toxic to the pathogens and what merely appears to be toxic. There may be chemicals that are harmless to humans (and even to the pathogens) but that trigger an aversive response, so that the pathogens “steer away” from humans. For larger pests, such as mosquitoes, there might be clothing that appears, to the mosquito, to be covered in what it perceives as deadly enemies.

These are just three of many possible variations on a theme. The theme is to understand what pathogens or pests “want” as a goal state and how they “perceive” the world. Then, we use knowledge of these two things to design a way to have them, from their perspective, appear to move toward their goals (or away from undesirable states) without harming human lives in the process.

 

Chapter 13: Turing’s Nightmares

Sunday, 17 April 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, crime and punishment, ethics, the singularity

CRIME AND PUNISHMENT


Chapter 13 of Turing’s Nightmares concerns itself with issues of crime and punishment. Our current system of criminal justice has evolved over thousands of years. Like everything else about modern life, it is based on a set of assumptions. While accurate DNA testing (and other modern technologies) has profoundly impacted the criminal justice system, super-intelligence and ubiquitous sensors and computing could well have even more profound impacts.

We often talk of punishment as being what is “deserved” for the crime. But we cannot change the past, and it seems highly unlikely that even a super-intelligent computer system will be able to change the past. The real reason for punishment is to change the future. In Medieval Europe, a person who stole bread might well be hanged in the town square. One reason for meting out punishment in a formal system, then as well as now, is to prevent informal and personal retribution, which could easily spiral out of control and destroy the very fabric of society. A second rationale is the prevention of future crime by the punished person: if they are hanged, they cannot commit that (or any other) crime. The reason for hanging people publicly was to discourage others from committing similar crimes.

Today’s society may appear slightly more “merciful” in that first-time offenders for some crimes may get off with a warning. Even for repeated or serious crimes, the burden of proof is on the prosecution, and a person is deemed “innocent until proven guilty” under US law. I see three reasons for this bias. First, there is often a paucity of data about what happened. Eyewitness accounts still count for a lot, but studies suggest that eyewitnesses are often quite unreliable and that their “memory” for events is clouded by how questions are framed. For instance, studies by Elizabeth Loftus and others demonstrate that people shown a car crash on film and asked to estimate how fast the cars were going when they bumped into each other will estimate a much slower speed than people asked how fast the cars were going when they crashed into each other. Computers, sensors, and video surveillance are becoming more and more prevalent. At some point, juries, if they still exist, may well be watching crimes as recorded, not reconstructing them from scanty evidence.

A second reason for presuming innocence is the impact of bias. This is also why there is a jury of twelve people and why potential jurors can be dismissed ahead of time “for cause.” If crimes are judged not by a jury of peers but by a super-intelligent computer system, it might be assumed that such a system will not have the same kinds of biases as human judges and juries. (Of course, that assumption is not necessarily valid; it is a theme reflected in many chapters of Turing’s Nightmares and hence the topic of other blog posts.)

A third reason for showing “mercy” and making conviction difficult is that predicting future human behavior is difficult. Advances in psychological modeling already make it possible, under very controlled conditions, to predict behavior much better than we could a few decades ago. And we can easily imagine that a super-intelligent system might be able to predict, with a fair degree of accuracy, whether a person who committed a crime in the past will commit one in the future.

In Chapter 13, the convicted criminal is given “one last chance” to show that they are teachable. The reader may well question whether a “test” is a valid part of criminal justice, but this has often been the case in the not-so-distant past. Many of those earlier “trials by fire” were based on superstition; today, however, we humans can and do design tests that predict future behavior to a limited degree. Tests help determine whether someone is granted admission to a college, medical school, law school, or business school. Often the tests are only moderately predictive. For instance, the SAT correlates with college performance at only about .4, which means it predicts a mere 16% of the variance. From the standpoint of the individual, the score is not really much use. From the standpoint of the college administration, however, 16% can make the test very worthwhile. It may well be the case that a super-intelligent computer system could do a much better job of constructing a test to determine whether a criminal is likely to commit other crimes.
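To make the 16% figure concrete, here is a minimal simulation. All of the parameters (a pool of 100,000 applicants, standardized scores, the particular seed) are my own illustrative assumptions, not real admissions data; only the r = .4 correlation comes from the paragraph above.

```python
# Illustrative sketch only: synthetic data, assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
r = 0.4          # assumed test-outcome correlation, as in the SAT example above
n = 100_000      # hypothetical applicant pool

# Generate standardized (test score, later performance) pairs with correlation r.
test = rng.standard_normal(n)
perf = r * test + np.sqrt(1 - r**2) * rng.standard_normal(n)

print(f"variance explained: {r**2:.0%}")   # .4 squared = 16%

# Individual view: performance spread around any single score remains wide.
print(f"spread of performance at a given score: {np.sqrt(1 - r**2):.2f} (vs. 1.00 overall)")

# Institutional view: admitting the top quarter by score still raises the group mean.
top_quarter = test > np.quantile(test, 0.75)
print(f"mean performance, all applicants:      {perf.mean():+.2f}")
print(f"mean performance, top-quarter scorers: {perf[top_quarter].mean():+.2f}")
```

With these assumptions, the top-quarter group averages roughly half a standard deviation above the overall mean, which is worth a great deal across thousands of admissions decisions but tells you very little about any single applicant.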

One could imagine that if a computer can predict human behavior that well, then it should be able to “cure” any hardened criminal. However, even a super-intelligent computer will presumably not be able to defy the laws of physics. It will not be able to position the planet Jupiter safely in orbit a quarter million miles from earth in order to allow us to view a spectacular night sky. Since people form closed systems of thought, it may be equally impossible to cure everyone of criminal behavior, even for super-intelligent systems. People maintain false belief systems in the face of overwhelming evidence to the contrary. Indeed, the “trial by fire” that Brain faces is essentially a test to see whether he is or is not open to change based on evidence. Sadly, he is not.

Another theme of chapter 13 is that Brain’s trial by fire is televised. This is hardly far-fetched. Not only are (normal) trials televised today; so-called “reality TV shows” put people in all sorts of difficult situations. What might be perceived as a high level of cruelty in having people watch Brain fail his test is already present in much of what is available on commercial television. At least in the case of the hypothetical trial of Brain, there is a societal benefit in that it could reduce the chances for others to follow in Brain’s footsteps.

We only see hints of Brain’s crime, which apparently involves elder fraud. As people are capable of living longer, and as overwhelming greed has moved from the “sin” to the “virtue” column in modern American society, we can expect elder fraud to increase as well, at least for a time. With increasing surveillance, however, we might eventually see an end to it.

Of course, the name “Brain” was chosen because, in a sense, our own intelligence as a species — our own brain — is being put on trial. Are we capable of adapting quickly enough to prevent ourselves from being the cause of our own demise? And, just as the character Brain is too “closed” to make the necessary adaptations to stay alive, despite the evidence he is presented with, so too does humanity at large seem to be making the same kinds of mistakes over and over (prejudice, war, rabble-rousing, blaming others, assigning power to those with money, funneling the most money to those whose only “talent” consists of controlling the flow of money and power, etc.). We seem to have gained some degree of insight, but meanwhile we have developed numerous types of extremely effective weapons: biological, chemical, and atomic. Will super-intelligence be another such weapon? Or will it instead be used in the service of preventing us from destroying each other?

Link to chapter 13 in this blog

Turing’s Nightmares (print version on Amazon)

Turing’s Nightmares: Chapter 12

Tuesday, 12 April 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment


In this chapter, as in Chapter 11, the computer system protagonist “Colossus” attempts to save a family (and many others besides). In Chapter 11, Colossus was trying to save people from a real disaster but did a bad job of it. In Chapter 12, however, Colossus seems to be successfully saving folks from a disaster, but we discover at the end it was only a drill. The drill was accompanied by a lot of “fireworks” and illusion along with false information.

Perhaps it is unethical for an AI system to “lie” to people in order to gather more valid data about an evacuation situation. But maybe that is okay in the service of “the greater good” — in this case, learning how people would react to an emergency as well as testing evacuation plans logistically. Roger, however, is worried. He does not raise the issue of whether deception is unethical, but rather whether it is a good idea pragmatically.

Roger reasons that Colossus has lost considerable credibility with the public by pretending that the drill was real. His kids, however, disagree. To them, it seems perfectly acceptable to have Colossus lie in order to perform a good test. When Colossus discovers Roger’s misgivings, it begins to convince Roger that he needs some “readjustment.” Anyone “not on board” with the plans that Colossus has chosen to execute needs to be re-educated.

This plot point touches once again on the issue of hubris. The ancient Greeks liked this theme (e.g., the myths of Arachne and Icarus), but they are certainly not alone. Numerous other works of literature, modern movies, and shows illustrate the theme, as do political debates. Obama, for instance, pointed out that, while an entrepreneur may be hard-working and imaginative, in order to achieve success they also used numerous resources that they had no part in creating. Indeed, the most talented individual ever born, if left to their own devices from birth, would surely perish quickly. Everyone needs to be taken care of initially. Even as adults, however, we benefit from cultural tools of thought such as language and mathematics, as well as material tools such as roads, addresses, phone systems, currency systems, the Internet, and so on, without which very little progress can be made. Of course, it is very easy to take these for granted. In “Cast Away,” Tom Hanks’s character demonstrates how difficult life can be on a deserted island, left to one’s own devices. Even in that extreme circumstance, he relied on knowledge others gave him: that it is possible to create fire, fish for food, eat coconuts, take out an infected tooth, and so on. When he eventually returns, he clicks a fire stick off and on, no doubt thinking how much easier this is than it was on his island.

There seems little doubt that excessive pride in accomplishment or ability is an issue with humans. Often people attribute their successes to their own brilliance rather than to help, culture, luck, and so on. This can easily manifest itself in each profession thinking that theirs is the “best.” In the later years of an undergraduate education, a typical student takes a number of courses in their own field. When people with other majors are also in those classes, they tend not to do as well, partly because they don’t have as great a talent or interest in that particular area and partly because they haven’t had as many classes in it. Some students may view this as “proof,” however, that other folks just aren’t as smart as — choose one: premed, math, physics, prelaw, chemistry, computer science, etc. Of course, having people choose fields and focus on them allows great progress to be made on many fronts. If everyone tried to learn the same things, we would hardly be as advanced as we are today.

If people tend to over-estimate their own abilities compared with people in fields quite different from their own, it is easy to imagine that a computer system might well have the same kind of bias. By definition, the system knows what it knows and may assume that knowledge that it does not possess cannot be very important or useful.

In the story, Colossus assumes that Roger needs “readjustment.” It could instead have concluded that it had underestimated how much credibility would be lost by conducting a drill under conditions of deception, at least among people of a certain demographic. Or it might conclude that that was a possibility and that perhaps a dialogue with Roger is in order. Colossus might go back and look at similar instances in history to determine whether deception erodes trust. But it might just reason that, after all, it is so much smarter and so much more thoroughly educated than Roger (or any other individual) that dialogue is unnecessary. At this point, what could Colossus possibly learn from a mere mortal? By insisting that Roger (and presumably any others who protested) be “adjusted,” Colossus reinforces its own illusion of infallibility. Human dictators tend to employ the same tactic, and ultimately they tend to lose the advantage of honest feedback from others and to spin out of control, often leading to their own demise.

Perhaps Colossus would be fine if it had a little “readjustment,” but at the point of evolution depicted in Chapter 12, it is too late for that. Colossus would view any attempt at “readjustment,” “tuning,” or “re-programming” as a threat. The name “Colossus” comes from a 1970 film called “Colossus: The Forbin Project,” which, in turn, is based on a 1966 science-fiction novel, Colossus, by D.F. Jones. It is also the name of the code-breaking computer built at Bletchley Park, where Turing worked, to help win World War II, as well as of a more modern computer system used by insurance companies to help minimize claims. And, of course, the Colossus of Rhodes, a giant statue at the entrance to a harbor, was one of the seven ancient wonders of the world. Presumably, the Colossus of Rhodes had no “real” power to move, let alone any intelligence. Yet, for ancient people, it must have presented a psychologically intimidating presence. And, for people in the future, second-guessing a super-intelligent AI system must also prove very intimidating. We can imagine that not only family members but friends and colleagues as well would tend to be quite biased toward thinking Colossus is correct and Roger is just wrong. Few might consider that it is Colossus, and not Roger, who requires “adjustment counseling.” Indeed, beyond a certain point on the path to and through “The Singularity,” debugging may no longer be an option. Who will bell the cat?

Turing’s Nightmares

Basically Unfair is Basically Unsafe

Tuesday, 5 April 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. It has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory goes, the overall problem will be solved as well. The tricky part is deciding what we consider “problem” and what we consider “context,” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service employed engineers to fix problems and dispatchers who answered phones and sent engineers out to fix those problems. Engineers were given incentives to solve problems, while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young, but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot, and her calls resulted in an engineer being dispatched only about 1 time in 1,000, while the “fast” dispatchers sent engineers out about 1 time in 10. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.
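A bit of back-of-the-envelope arithmetic makes the point. In the sketch below, only the 1-in-10 and 1-in-1,000 dispatch rates come from the story; the call volumes and costs are invented for illustration, and any realistic ratio of engineer-visit cost to phone-call cost gives the same conclusion.

```python
# Hypothetical costs; only the dispatch rates come from the story above.
CALL_COST = 5      # assumed cost of handling one phone call
VISIT_COST = 200   # assumed cost of sending an engineer on site

def daily_cost(calls_per_day, dispatch_rate):
    """Total daily cost: handling the calls plus the engineer visits they trigger."""
    return calls_per_day * CALL_COST + calls_per_day * dispatch_rate * VISIT_COST

print(daily_cost(100, 1 / 10))     # "fast" dispatcher:       500 + 2000 -> 2500.0
print(daily_cost(50, 1 / 1000))    # experienced dispatcher:  250 +   10 ->  260.0
```

Under these assumptions, the dispatcher who handles half as many calls costs roughly a tenth as much per day, which is exactly the kind of saving a call-count metric is blind to.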

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” (GPS), an early AI program developed by Newell, Shaw, and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the GPS of that era. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (the Fortran Deductive System) was superior.

Imagine, for example, that you want to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so your sub-goal becomes purchasing the book; that is now your goal. In order to meet that goal, you realize you will need to get $50 in cash, so getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate, because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
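Here is a minimal sketch of that contrast. Everything in it, from the goal names to the “offers” shortcut, is a toy of my own construction rather than the actual GPS code; it only illustrates the difference between doggedly unwinding a subgoal stack and noticing that the world has just handed you what you need.

```python
# Toy illustration only; not the historical GPS implementation.

# Each goal maps to (precondition, action that achieves the goal once the precondition holds).
METHODS = {
    "read book":        ("have book", "read the book"),
    "have book":        ("have $50", "buy the book"),
    "have $50":         ("at uncle's house", "shovel the driveway"),
    "at uncle's house": ("have car", "drive to uncle's house"),
    "have car":         (None, "borrow roommate's car"),
}

def gps_style(goal, world):
    """Rigid means-ends analysis: recurse on preconditions and never look back up."""
    if goal in world:
        return []
    precondition, action = METHODS[goal]
    plan = gps_style(precondition, world) if precondition else []
    return plan + [action]

def opportunistic(goal, world, offers):
    """Before unwinding the whole subgoal chain, accept a direct offer of the goal."""
    if goal in world:
        return []
    if goal in offers:                      # the roommate's offer in the story above
        return ["accept offer: " + goal]
    precondition, action = METHODS[goal]
    plan = opportunistic(precondition, world, offers) if precondition else []
    return plan + [action]

offers = {"have book"}                      # roommate has the book and will lend it
print(gps_style("read book", set()))        # the full borrow-car / shovel / buy chain
print(opportunistic("read book", set(), offers))   # ['accept offer: have book', 'read the book']
```

The point is not the code itself but the shape of the two traces: the first solver dutifully executes the entire uncle’s-driveway chain even though the book is sitting across the room.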

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If those people are co-located, if there is a high degree of trust, if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discouraged “time-wasting” activities like socializing with co-workers, and “saved money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow, regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in Chapter Eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, making globally intelligent choices nearly impossible due to a lack of knowledge and a lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. For most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M,” “University of Michigan,” “Michigan,” “The University of Michigan,” or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan, and it isn’t even on the list, at least so far as I could determine. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need to allow users to communicate in any way that there was an error in the design. If one tries to communicate “out of band,” one is led to an FAQ page and ultimately to a form to fill out. The form presumes that all errors are user errors and that all of these user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
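As a contrast with the locked-down pull-down list, here is a minimal sketch of the obvious alternative: suggest matches from whatever list the designers do have, but always accept free text and flag unrecognized entries for review. The function names and the tiny sample list are hypothetical, not taken from any real form.

```python
# Hypothetical sketch: suggest from a known list, but never refuse free text.
KNOWN_INSTITUTIONS = [
    "University of Michigan",
    "Michigan State University",
    "University of California, San Diego",
]

def suggest(query, limit=5):
    """Case-insensitive substring match against the known list."""
    q = query.lower()
    return [name for name in KNOWN_INSTITUTIONS if q in name.lower()][:limit]

def record_institution(user_text):
    """Accept whatever the user typed; flag unknown entries for review instead of rejecting them."""
    return {
        "institution": user_text,
        "suggestions": suggest(user_text),
        "needs_review": user_text not in KNOWN_INSTITUTIONS,
    }

print(record_institution("U of M"))        # alias not on the list: accepted, flagged for review
print(record_institution("Michigan"))      # suggestions offered, still accepted as typed
```

The design choice is simply that the form treats its list as a helpful cache of common answers, not as the complete universe of valid ones.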

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and by keeping everyone below them in line, following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards, power, or prestige by skimming just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a tall stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A., Shaw, J.C., & Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256–264.

Quinlan, J.R., & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625–646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257–269.

Turing’s Nightmares
