Monthly Archives: May 2016

Turing’s Nightmares: Chapter 16

25 Wednesday May 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, UX

WHO CAN TELL THE DANCER FROM THE DANCE?


Is it the same dance? Look familiar?

 

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem “Among School Children.” The actual last line is: “How can we know the dancer from the dance?” Both phrasings focus on the interesting problem of trying to separate process from product, a person from their creative works, calling into question whether such a separation is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, of computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints, and their contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who hold the key to this back door. Here, the implication is that some wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short-term profits. Of course, this is a work of fiction. In the real world, no one would conceivably be evil enough to mortgage the human habitability of our planet for even more short-term profit — certainly not someone already absurdly wealthy.
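To make the back door idea concrete, here is a deliberately tiny sketch of how such a thing might work in an advice-giving system. Everything in it (the names, the key, the sponsor_benefit field) is invented for illustration; the book does not specify an implementation. Ordinarily the system ranks advice by how well the evidence supports it, but anyone holding the secret key can quietly re-rank the advice in the sponsor’s interest.

```python
# Hypothetical sketch of a "back door" in an advice system.
# All names, keys, and weights here are invented for illustration.

SECRET_KEY = "0xRUSLAN"  # known only to the key-holders

def score(recommendation, evidence):
    """Ordinary scoring: how well the evidence supports the advice."""
    return sum(evidence.get(fact, 0.0) for fact in recommendation["supports"])

def advise(candidates, evidence, key=None):
    """Rank candidate advice by evidence -- unless the back door is used."""
    ranked = sorted(candidates, key=lambda c: score(c, evidence), reverse=True)
    if key == SECRET_KEY:
        # Back door: quietly promote whatever serves the sponsor,
        # regardless of what the evidence says.
        ranked.sort(key=lambda c: c.get("sponsor_benefit", 0.0), reverse=True)
    return ranked

candidates = [
    {"advice": "cut emissions", "supports": ["warming_data"], "sponsor_benefit": 0.0},
    {"advice": "doubt the data", "supports": [], "sponsor_benefit": 1.0},
]
evidence = {"warming_data": 0.97}

print(advise(candidates, evidence)[0]["advice"])                  # cut emissions
print(advise(candidates, evidence, key=SECRET_KEY)[0]["advice"])  # doubt the data
```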

In the story, the protagonist, Geoffrey, rather resents having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping the super-intelligent system would be objective. We can also assume the requirement was added late and that no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. In any case, it doesn’t take Geoffrey long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence per se, though it could certainly be used as a persuasive tool by AI systems. Rather than a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings,” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” — in this case a super-intelligent someone — is persuaded by arguments to change their position and “agree” with the other Sing.
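As a rough sketch of that two-system structure (purely illustrative; the chapter gives no implementation, and the update rule below is an invented toy), one could imagine two agents that exchange arguments over several rounds, each adjusting its confidence against the other’s evidence until one concedes:

```python
# Illustrative two-agent "debate": each agent holds a position with a
# confidence level and concedes if the opposing evidence outweighs its own.
# Everything here is an invented toy, not the book's design.

class Sing:
    def __init__(self, name, position, evidence_strength):
        self.name = name
        self.position = position
        self.confidence = evidence_strength  # 0.0 .. 1.0

    def argue(self):
        return self.confidence  # stand-in for "strength of the argument made"

    def consider(self, opposing_strength):
        """Shift confidence toward whichever side has the stronger evidence."""
        self.confidence += 0.25 * (self.confidence - opposing_strength)
        self.confidence = max(0.0, min(1.0, self.confidence))
        return self.confidence >= 0.5  # still persuaded by its own position?

def debate(a, b, rounds=5):
    for _ in range(rounds):
        if not a.consider(b.argue()):
            return f"{a.name} concedes and agrees with {b.name}"
        if not b.consider(a.argue()):
            return f"{b.name} concedes and agrees with {a.name}"
    return "no agreement reached"

print(debate(Sing("Sing-1", "policy X", 0.4), Sing("Sing-2", "policy Y", 0.8)))
```

The point of the structure is not the arithmetic but the visibility: the concession itself is what makes the surviving position persuasive.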

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan, and Ruslan’s wife. People can be manipulated. Readers are invited to design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale

 

Turing’s Nightmares: Chapter 15

16 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring

Tutoring Intelligent Systems.


Learning by modeling; in this case by modeling something in the real world.

Of course, the title of the chapter is a play on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples that are geared toward the specific gaps or misconceptions that that particular student has. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor,” there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the proceedings: Radlinski, B., Atwood, M., and Villano, M., “DIME: Distributed Intelligent Multimedia Education,” Proceedings of the CHI ’94 Conference Companion on Human Factors in Computing Systems, pages 15–16, ACM, New York, NY, USA, 1994.)
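The core loop of such a tutor can be sketched very simply. This is a toy illustration only, not Anderson’s actual model-tracing approach; the skill names and learning rate are invented. The idea: keep an estimate of the student’s mastery of each skill, update it after every answer, and aim the next exercise at the weakest estimated skill.

```python
# Toy student model in the spirit of intelligent tutoring systems.
# Skill names and rates are invented for illustration; real model
# tracing (e.g., in Anderson's LISP tutor) is far richer than this.

class StudentModel:
    def __init__(self, skills, prior=0.3):
        # Estimated probability that the student has mastered each skill.
        self.mastery = {skill: prior for skill in skills}

    def update(self, skill, correct, rate=0.2):
        """Nudge the estimate toward 1 on a correct answer, toward 0 on an error."""
        target = 1.0 if correct else 0.0
        self.mastery[skill] += rate * (target - self.mastery[skill])

    def next_skill(self):
        """Tutor the skill the student is estimated to know least well."""
        return min(self.mastery, key=self.mastery.get)

model = StudentModel(["loops", "recursion", "arrays"])
model.update("loops", correct=True)
model.update("recursion", correct=False)
print(model.next_skill())  # "recursion" -- the weakest estimated skill
print(model.mastery)
```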

The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. It seems as though many AI systems take the approach either of having human experts encode knowledge rather directly or of exposing the systems to many examples and letting them learn on their own. Human beings often learn by being exposed to examples while a guide, tutor, or coach helps them focus, provides modeling, and chooses the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to exposure to many Jeopardy questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.

In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. The tutor’s remark, presumably meant to encourage “Sing” to consider other possibilities about John and Alan, is combined with another hint about the implications of being differently abled, and together they lead to the idea that there is no necessity for the AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates that solving an international crisis will take a long time. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and quicker to solve.

Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, etc., and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although, to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might help people see their way clear to finding other solutions to problems.


The story ends with a refrain paraphrased from the TV series “The West Wing.” “What comes next?” is meant to be reminiscent of “What’s next?”, which President Bartlet uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it’s on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next after AI systems are able not only to understand and utilize human emotion but also to invent their own emotional states on the fly based on the nature of the problem at hand. Indeed, what comes next?

Turing’s Nightmares: Chapter 14

02 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, pets, the singularity, Turing


 

Dear reader: Spoiler alert: before reading this blog post, you may want to read the associated chapter. You can buy the physical book, Turing’s Nightmares, at this link:

http://tinyurl.com/hz6dg2d

An earlier version of the chapter discussed below can be found at this link:

https://petersironwood.wordpress.com/2015/10/

One of the issues raised by chapter 14 of Turing’s Nightmares is that the scenario presumes that, even in the post-singularity future, there will still be a need for government. In particular, the future envisioned includes individuals as well as a collective. Indeed, the goals of the “collective” will remain somewhat different from the goals of various individuals. An argument can be made that the need for complex governmental processes and structures will actually increase with hyper-intelligence, but that argument will be saved for another time.

This scenario further assumes that advanced AI systems will have emotions and emotional attachments to other complex systems. What is the benefit of having emotional attachments? Some people may feel that emotional attachments are as outdated as the appendix; perhaps they had some function when humans lived in small tribes, but now they cause as much difficulty as they confer advantage. Even if you believe that emotional attachments are great for humans, you still might be puzzled as to why it could be advantageous for an AI system to have any.

When it comes to people, they vary a lot in their capabilities, habits, and so on. So, one reason emotional attachments “make sense” is to prefer, and act in the interest of, people who have a range of useful and complementary abilities and habitual behaviors. Wouldn’t you naturally start to like someone who has similar interests, other things being equal? Moreover, as you work with someone else toward a common goal, you begin to understand each other and learn how to work together better. You learn to trust each other and to communicate in shorthand. If you become disconnected from such a person, it can be disconcerting for all sorts of reasons. But exactly the same could hold true for an autonomous agent with artificial intelligence. There could be reasons for having not one ubiquitous type of robot but millions of different kinds. Some of these would work well together, and it could make sense to have them “bond” and differentially prefer their mutual proximity and interaction.

Humans, of course, also form emotional attachments, sometimes very deep ones, with animals. Most commonly, people form bonds with cats, dogs, and horses, but people have had a huge variety of pets, including birds, turtles, snakes, ferrets, mice, rabbits, and even tarantulas. What’s up with that? The discussion above about emotional attachment was intentionally “forced” and “cold,” because human attachments cannot be well explained in utilitarian terms. People love others who have no possible way to offer back any value other than their love in return.

In some cases, pets do have some utilitarian value, such as catching mice, barking at intruders, or pulling hay wagons. But overwhelmingly, people love their pets because they love their pets! If asked, they may say it is because their pets are “cute” or “cuddly,” but this doesn’t really answer the question of why people love pets. According to a review by John Archer published in the July 1997 issue of Evolution and Human Behavior, “These mechanisms can, in some circumstances, cause pet owners to derive more satisfaction from their pet relationship than those with humans, because they supply a type of unconditional relationship that is usually absent from those with other human beings.”

However, there are also other hypotheses. For example, Edward O. Wilson’s Biophilia (1986)

http://www.amazon.com/Biophilia-Edward-Wilson/dp/0674074424

suggests that during early hominid history, there was a distinct survival advantage to observing and remaining close to other animals living in nature. Would it make more sense to gravitate toward a habitat filled with life, or one utterly devoid of it? Humans and other animals generally want to move toward similar things (fresh water, a food supply, cover, reasonable temperatures, etc.) and to avoid other things (dangerous places, temperature extremes, etc.). This might explain why people like lush and living environments, but it probably does not explain, in itself, why we actually love our pets.

Perhaps one among many possible reasons is that pets reflect aspects of our most basic natures, aspects that, in civilization, are often hidden by social conventions. In effect, we can actually learn about how we ourselves are by observing and interacting with our pets. Among the various reasons why we love our pets, this strikes me as the most likely one to hold true for super-AI systems as well. Of course, they may also like cats and dogs for the same reason. But in the same way that most of us prefer cats and dogs over turtles and spiders because of the complexity and similarity of mammalian behavior, we can imagine that post-singularity AI systems might prefer human pets: we would be more complex and would probably, at least initially, share many of the values, prejudices, and interests of the AI systems, since their initial programming would inevitably reflect humans.

Another premise of chapter 14 is that even with super-intelligent systems, resources will not be infinite. Many dystopian and utopian science fiction works alike seem to assume that, in the future, space travel (for example) will be dirt cheap. That might happen. Ignoring economic scarcity certainly makes writing more convenient. Realistically, though, I see no reason why resources will be essentially infinite; that is, so universally cheap that there will no longer be any contention for them. It is conceivable that super-intelligent beings might discover some entirely new physical properties of the universe that make this the new reality. But it is also possible that “super-intelligent beings” might be even more inclined to over-use the resources of the planet than we humans are, and that contention for resources will be even fiercer.

Increasing greediness seems at least as likely as the alternative; viz., that although humans became greedier and greedier and used up more and more resources as they gained more and more power, at the magic moment when machines became smarter than people, those machines would suddenly take an interest in actually behaving sustainably. Maybe, but why?

Anyway, it’s getting late and past time to feed the six cats.

Interested readers may want to tune in to a podcast tonight, Monday, May 2nd, at 7pm PST, using the link below. I will be interviewed about robotics, artificial intelligence, and human-computer interaction.

https://blab.im/nick-rishwain-roboticslive-ep-1-human-computer-interactions-w-john-charles-truthtablejc
