petersironwood

~ Finding, formulating and solving life's frustrations.
Monthly Archives: February 2016

Turing’s Nightmares: Chapter Three

27 Saturday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, the singularity, Turing

Chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” touches on at least four major issues: 1) the value of autonomous robotic entities for improving intelligence, 2) the value of having multiple and diverse AI systems that live somewhat different lives and interact with each other, again for improving intelligence, 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought, and 4) the fact that a truly super-intelligent system will have to rely, to some extent, on inferences from many real-life examples to induce principles of conduct, rather than on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some practical applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral patterns, voice, and preferences of a particular person more easily than we could develop fully speaker-independent speech recognition and preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages to having actual robotic systems that sense and act in the real world, in terms of moving us closer to “The Singularity.” This theme is explored again, in somewhat more depth, in chapter 18.

I would not personally argue that having an entity that moves through space and perceives is necessary for having any intelligence, or for that matter, any consciousness. However, it seems quite natural to believe that the quality of intelligence and consciousness is influenced by what the entity is able to perceive and to do. As human beings, our consciousness is largely shaped by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were developed historically by a population that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if such a being were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked together on a pivoting carousel apparatus; one kitten could actively “walk” through a visual field while the other was passively carried through that same visual field. The kitten that was able to walk developed normal visually guided behavior while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences USA, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity seems to be an advantage when it comes to genetic evolution and when it comes to the people who make up teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote at the ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligences by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow down progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to! An analogy is the first “proof” that you only need four colors to color any planar map. That proof involved so many cases (nearly 2,000) that it made no sense to most people. Even the mathematicians who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different from and better (at least for them) than any that we have developed. This, too, will tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy!, the advances along the trajectory toward “The Singularity” will require that the system “read” and infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching the “Golden Rule.” But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that a singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares


Music to MY Ears

23 Tuesday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, music, the singularity, Turing, values


The non-sound of non-music.

What follows is the first of a series of blog posts that discuss, in turn, the scenarios in “Turing’s Nightmares” (https://www.amazon.com/author/truthtable).

One of the deep dilemmas of the human condition is this. In order to function in a complex society, people become “expert” in particular areas. Ideally, the areas we choose are consistent with our passions and with our innate talents. This results in a wonderful world! We have people who are expert in cooking, music, art, farming, and designing clothes. Some choose journalism, mathematics, medicine, sports, or finance as their fields. Expertise often becomes yet more precise. People are not just “scientists” but computer scientists, biologists, or chemists. The computer scientists may specialize still further into chip design, software tools, or artificial intelligence. All of this specialization not only makes the world more interesting; it makes it possible to support billions of people on the planet. But here is the rub. As we become more and more specialized, it becomes more difficult for us to communicate with and appreciate each other. We tend to accept the concerns and values of our own field and sub-sub-specialty as the “best” or “most important” ones.

To me, this is evident in the largely unstated and unchallenged assumption that a super-intelligent machine would necessarily have the slightest interest in building a “still more intelligent machine.” Such a machine might be so inclined. But it might also be inclined to choose some other human pursuit, or, still more likely, to pursue something that is of no interest whatever to any human being.

Of course, one could theoretically ensure that a “super-intelligent” system is pre-programmed with an immutable value system that guarantees it will pursue, as its top priority, building a still more intelligent system. However, to do so would inherently limit the ability of the machine to be “super-intelligent.” We would be assuming that we already know the answer to what is most valuable, and we would hamstring the system from discovering anything more valuable or more important. To me, this makes as much sense as an all-powerful God allowing a species of whale to evolve, but predefining that its most urgent desire is to fly.

An interesting example of values can be seen in the figure-analogy dissertation of T.G. Evans (1968). Evans, a student of Marvin Minsky, developed a program to solve multiple-choice figure analogies of the form A:B::C:D1, D2, D3, D4, or D5. The program essentially tried to “discover” transformations and relationships between A and B that could also account for relationships between C and the various D possibilities. And, indeed, it could find such relationships. In fact, every answer is “correct.” That is to say, the program was so powerful that it could “rationalize” any of the answers as being correct. According to Evans’s account, fully half of the work of the dissertation was discovering and then inculcating his program with the implicit values of the test makers so that it chose the same “correct” answers as the people who published the test. (This is discussed in more detail in the pattern “Education and Values” that I contributed to Liberating Voices: A Pattern Language for Communication Revolution (2008), Douglas Schuler, MIT Press.) For example, suppose that A is a capital “T” figure and B is an upside-down “T” figure. C is an “F” figure. Among the possible answers are “F” figures in various orientations. To go from a “T” to an upside-down “T,” you can rotate the “T” 180 degrees in the plane of the paper. But you can also get there by “flipping” the “T” outward from the plane. Or, you could “translate” the top bar of the “T” from the top to the bottom of the vertical bar. It turns out that the people who published the test prefer that you rotate the “T” in the plane of the paper. But why is this “correct”? In “real life,” of course, there is generally much more context to help you determine what is most feasible. Often, there will be costs or side effects of various transformations that help determine which is the “best” answer. But in standardized tests, all of that context is stripped away.
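To make the point concrete, here is a small, purely illustrative sketch in Python of the kind of value-weighted choice involved. This is not Evans’s actual ANALOGY program: the figure encoding, the transformation names, and the preference weights are my own assumptions. The point it illustrates is the one above: several transformations can each “explain” the A:B relationship, so which multiple-choice answer comes out as “correct” depends entirely on the preference weights built into the program.

```python
# Toy sketch, not Evans's actual program: figures are reduced to
# (shape, orientation-in-degrees) pairs, and preference weights stand in
# for the implicit "values" of the people who published the test.

# Assumed preference weights (higher = more preferred by the test makers).
PREFERENCE = {
    "rotate_in_plane": 3,
    "flip_out_of_plane": 2,
    "translate_top_bar": 1,
}

def explanations(src, dst):
    """Return the simple transformations that would map figure src onto dst."""
    shape_s, angle_s = src
    shape_d, angle_d = dst
    found = set()
    if shape_s == shape_d:
        delta = (angle_d - angle_s) % 360
        found.add(("rotate_in_plane", delta))
        if delta == 180:
            # An out-of-plane flip, or moving the top bar of a "T" to the
            # bottom of its stem, also "rationalizes" the upside-down figure.
            found.add(("flip_out_of_plane", 0))
            found.add(("translate_top_bar", 0))
    return found

def solve(a, b, c, choices):
    """Pick the choice whose shared A:B and C:D explanation is most valued."""
    ab = explanations(a, b)
    best_choice, best_value = None, -1
    for d in choices:
        for name, _ in ab & explanations(c, d):
            if PREFERENCE[name] > best_value:
                best_choice, best_value = d, PREFERENCE[name]
    return best_choice

# "T" : upside-down "T" :: upright "F" : which answer?
print(solve(("T", 0), ("T", 180), ("F", 0),
            [("F", 180), ("F", 90), ("G", 0)]))  # -> ("F", 180)
```

Change the weights and a different answer becomes “correct,” even though nothing about the figures themselves has changed; that is exactly the sense in which half the work was inculcating the test makers’ values.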

Here is another example of values. If you ever take the Wechsler “intelligence” test, one series of questions will ask you how two things are alike. For instance, they might ask, “How are an apple and a peach alike?” You are “supposed to” answer that they are both fruit. True enough. This gives you two points. If you give a functional answer, such as “You can eat them both,” you only get one point. If you give an attributional answer, such as “They are both round,” you get zero points. Why? Is this a wrong answer? Certainly not! The test makers are measuring the degree to which you have internalized a particular hierarchical classification system. Of course, there are many tasks and contexts in which this classification system is useful. But in some tasks and contexts, seeing that they are both round, or that they both grow on trees, or that they are both subject to pests, is the most important thing to note.
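The scoring rubric itself makes this value system explicit. Here is a minimal, hypothetical sketch in Python; the category names and point values follow the description above, not the actual Wechsler scoring manual. The “values” of the test are, in effect, a lookup table that ranks one kind of similarity above the others.

```python
# Hypothetical sketch of the similarity-scoring rubric described above;
# not the actual Wechsler scoring manual.
ANSWER_TYPE_POINTS = {
    "categorical":   2,  # "they are both fruit"
    "functional":    1,  # "you can eat them both"
    "attributional": 0,  # "they are both round" -- not wrong, just unvalued
}

def score_similarity_answer(answer_type: str) -> int:
    """Score an answer solely by which kind of similarity it expresses."""
    return ANSWER_TYPE_POINTS.get(answer_type, 0)

print(score_similarity_answer("categorical"))    # 2
print(score_similarity_answer("attributional"))  # 0
```

Swap the point values and the table would reward functional or attributional thinking instead; nothing about the answers themselves would have changed.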

We might define intelligence as the ability to solve problems. A problem can be seen as wanting to be in a state that you are not currently in. But what if you have no desire to be in the “desired” state? Then, for you, it is not a problem. Suppose a child is given a homework assignment asking them to find the square root of 2 to four decimal places. If the child truly does not care, it may become a problem, not for the child, but for the parent: “How can I make my child do this?” The parent may threaten or cajole or reward the child until the child wants to write out the answer. So, the child may say, “Okay. I can do this. Leave me alone.” Then, after the parent leaves, they text a friend on the phone and copy the answer onto their paper. The child has now solved their problem.

Would a super-intelligent machine necessarily want to build a still more intelligent machine? Maybe it would want to paint, make music, or add numbers all day. And, if it did decide to make music, would that music be designed for us or for its own enjoyment?

Indeed, a large part of the values equation is “for whose benefit?” Typically, in our society, when someone pays for a system, they get to determine for whose benefit the system is designed. But even that is complex. You might say that cigarettes are “designed” for the “benefit” of the smoker. But in reality, while they satisfy a short-term desire of the smoker, they are designed for the benefit of the tobacco company executives, who set up a system in which smokers paid for research into how to make cigarettes even more addictive and for advertising to make them appeal to young children. There are many such systems. If AI systems continue to become more ubiquitous and complex, the values inherent in such systems, and who is meant to benefit from them, will become more and more difficult to trace.

Values are inextricably bound up with what constitutes a “problem” and what constitutes a “solution.” This is no trivial matter. Hitler considered the annihilation of the Jews the “final solution.” Some people in today’s society think that the “solution” to the “drug problem” is a “war on drugs,” which has certainly destroyed orders of magnitude more lives than drugs themselves have. (Major sponsors of the “Partnership for a Drug-Free America” have been drug companies.) Some people consider the “solution” to the problem of crime to be stricter enforcement, harsher penalties, and building more prisons. Other people think that a more equitable society, with more opportunities for jobs and education, will do far more to mitigate crime. Which is the more “intelligent” solution? Values will be a critical part of any AI system. Generally, the inculcation of values is an implicit process. But if AI systems begin making what are essentially autonomous decisions that affect all of us, we need to have a very open and very explicit discussion of the values inherent in such systems now.

Turing’s Nightmares

The Art in “Turing’s Nightmares”

21 Sunday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, art, Artificial Intelligence, cognitive computing, the singularity, Turing

Now that “Turing’s Nightmares” will soon be available in book form on Amazon, I will be exploring the issues raised in each chapter in more depth, one chapter at a time. Before turning to the specific scenarios, a word is in order about the artwork in the book. I used a variety of styles of artwork to set the mood for each scenario and chapter. One of these is a photo of a painting created by my grandfather, Roy Bryant Weimer. He was both an artist and an engineer with several patents. He was born before many of the basic technologies of today (e.g., cars, planes, computers) existed. In his later years, he turned away from oils to watercolors and limited himself to three pigments, all oxides of metal, so that his paintings would not change over time.

 

[Image: watercolor painting by Roy Bryant Weimer]

 

A good plan. However, when my wife and I moved from New York to California a few years ago, almost all of our possessions, including most of my grandfather’s paintings, were destroyed in a moving-van fire. So much for anticipating the future. His painting, in the context of the book, is meant to exemplify an “old” style of painting. In the painting (shown above), an older gentleman is explaining to a youngster, probably a grandchild, an old battle in which the castle in the background figured heavily. The young child, inspired perhaps by the tale of that battle, already has another generation of weapons (or their representations) beside him.

Turing’s Nightmares also includes a number of pictures that I took with an iPhone. These are unprocessed. They are examples of perceiving the world as it is today, using a technology that is currently quite popular.


These pictures are meant to set the scene or the mood of the accompanying chapter and represent those elements of our future that may (or may not) remain relatively unchanged as AI moves forward.

Another set of pictures was created by my son, David Thomas, who has had a long career in both IT and art/graphics. The starting points were photographs of current-day “reality,” which were then processed digitally, either with the “dreamdeeply” software (shown below) or with Tangled FX. These are meant to express the notion that there are various possible evolutions of what we see today into what might be in our future. Generally, the viewer can see quite easily what in current reality served as the “starting point” for the image. But there are also emergent patterns in these images that would be difficult to predict and that are unlike today’s world.

David's DreamDeeply

Finally, there is another set of pictures created by my grandson, Pierce Morgan, who, like his great-great-grandfather, is both an engineer and an artist. These were created using traditional materials, yet they are apparently non-representational. In other words, they bear little resemblance to what we perceive as current reality. However, if one looks closely at these images (like the one shown below), one can also see echoes of various objects emerging from today’s reality. Similarly, in much of my grandfather’s art there are hints of hidden objects, thus bringing us “full circle.”

Untitled 2
