petersironwood

~ Finding, formulating and solving life's frustrations.
Category Archives: psychology

Is Smarter the Answer?

31 Monday Oct 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, learning organization


Lately, I have been seeing a fair number of questions on Quora (www.quora.com) asking whether we humans would be “better off” if AI systems “took over the world.” After all, it is argued, an AI system could be smarter than humans. It is an interesting premise and one worthy of consideration. It is clear, after all, that human beings have polluted our planet, have been involved in many wars, have often made a mess of things, and right now we are a mere hair’s breadth away from electing a US President who could start an atomic war for no more profound reason than that someone disagreed with him or questioned the size of his hands.

Personally, I don’t think that having AI systems “replace” human beings or “rule” them would be a good thing, for three main reasons. First, I don’t think human beings are in a mess because they are not intelligent enough. Second, if AI systems did “replace” human beings, then even if such systems were more intelligent and avoided the real causes of the mess we’re in (greed and hubris, by my lights), they could easily have other flaws of equal magnitude. Third, human life is an end in itself, not a means to an end. Let us examine these in turn.

First, there are many species of plants and animals on earth that are, by any reasonable definition, much less intelligent than humans and yet have not over-polluted the planet nor put us on the brink of atomic war. There are at least a few other species, such as dolphins, that are about as intelligent as we are but have not had anything like our world-wide negative ecological impact. No, although we often run into individual people who act against our (and their own) interests, and it seems as though we (and they) would be better off if they were more intelligent, I don’t think lack of intelligence (or even education) is the root of the problem with people.

Here are some simple, everyday examples. I went to the grocery store yesterday. When I checked out, someone else packed my groceries. Badly. Indeed, almost every time I go to the store, they pack the groceries badly (if I can’t pack them myself). What do I mean by badly? One full bag had ripe tomatoes at the bottom. Another paper bag was filled with cans of cat food; it was too heavy for its handles. A third bag was packed lightly but over-filled, so that the handles would break if you held the bag naturally. It might be tempting to think that this bagger was not very intelligent. I believe the causes of bad packing are different. First, packers typically (but not universally) pay very little attention to what they are actually doing. They are clearly thinking about something other than what they are doing. Indeed, this describes a lot of human activity, at least in the modern USA. Second, packers work in a badly designed system. Once my cart is loaded up, another customer is already having their food scanned on the conveyor belt and the packer is already busy. There is no time to give the packer feedback on the job they have done. Nor is the situation socially appropriate for it. No matter how gently done, a critique of their performance in front of their colleagues and possibly their manager will be interpreted as an evaluation rather than an opportunity for learning. Even if I did give them feedback, they may or may not believe it. It would be better if the packer could follow me home and observe for themselves what a mess they have made of the packing job. I think if they did that a few times, they’d be plenty smart enough to figure out how to pack better.

Unfortunately, packing is not the only example of this type of system. Another common example is software development. Programmers are typically quite intelligent, but they often build their software and never get a chance to see it in action. Many organizations do not carry out user studies “in the wild” to see how products and services are actually used. It isn’t that the software builders are not smart. The problem is that they do not get any real feedback on their decisions. Again, as in the case of the packers, the programmers exist in an organizational structure that makes honest feedback about their errors far too often seem like an evaluation of them rather than an occasion for learning.

A third example is hotel personnel. A hotel is basically a service business; the cost of the room itself is a small part of the price. A hotel exists because it serves its customers. Despite this, the people behind the desks seldom have incentives or mechanisms to hear, understand, and fix problems that their customers encounter. A quintessential example came in Boston when my wife and I were there for a planning meeting for a conference she would be chairing a few months later. When we checked out, the clerk asked whether everything was all right. We replied that the room was too hot but we couldn’t seem to get the air conditioning to work. The clerk said, “Oh, yes! Everyone has that problem. You need to turn on the heater for the A/C to work.” This was a bad temperature-control design for starters, but the clerk’s response clearly indicated that the staff were aware of the problem but had no power (and/or incentive) to fix it.

These are not isolated examples. I am sure that you, the reader, can supply a dozen more. People are smart enough to see and solve the problems, but that is not their job. Furthermore, they will basically get “shot down,” or at best ignored, if they try to fix the problem. So, I really don’t think the issue is that people are not “smart enough” individually to fix many of the problems we have. It is that we design systems that make us collectively not very smart. (Of course, in outrageous cases, even some individual humans are so prideful that they cannot learn from honest feedback from others.)

Now, you could say that such systems are themselves proof that we are not smart enough. However, that is not a very good explanation. There are existence proofs of smarter organizations; the sad part is that they are exceptions rather than the rule. In my experience, what keeps people from adopting better organizational designs (e.g., ones where people are empowered to understand and fix problems) is hubris and greed, not a lack of intelligence.

First, consider hubris. In many situations, people believe that they already know everything they need in order to do their job. They certainly don’t want public feedback indicating that they are making mistakes (i.e., could improve), and this attitude spreads to their processing of private feedback. You can easily imagine a computer programmer saying, “I’ve been writing code for user interfaces for thirty years! Now you’re telling me I don’t know how?” Why can we imagine that so easily? Because most of the organizations we live in are not organizations where learning to improve is stressed.

Second, consider greed. In many organizations, the rules, processes, and management structure make very little sense if the main goal is to make the organization as effective as possible. They make perfect sense, however, if the main goal is to keep the people who already have the most power and money in power and making money. To do that on an ongoing basis, it is true that the organization must be minimally competent. If it is a grocery store, it must sell groceries at some profit. If it is a software company, it needs to produce some software. If it is a hotel, it can’t simply poison all of its potential guests. But to stay in business, none of these organizations needs to do a stellar and ever-improving job.

So, from my perspective, the reason most organizations are not better learning organizations is not that we humans lack intelligence. The reason for marginally effective organizations is that their actual goal is mainly to keep the people at the top in power. Greed is the biggest problem with people, not lack of intelligence. History shows that such greed is ultimately self-defeating. Power corrupts, all right, and eventually power erodes itself or explodes in revolution. But greedy people continue to believe that they can outsmart history. Dictators believe that they will not suffer the same fate as Hitler or Mussolini. CEOs believe their bad deeds will go unpunished (indeed, often that’s true). So-called leaders often reject criticism from others and eventually spin out of control. That’s hubris.

I see no reason whatever to believe that AI systems, however intelligent, would be more than reflections of greed and hubris. It is theoretically possible to design AI systems without hubris and greed, but it is also quite possible to raise human beings for whom hubris and greed are not the predominant motivations. We all know people who are eager to learn throughout life; who listen to others; who work collaboratively to solve problems; who give generously of their time, money, and possessions. In fact, humans are generally very social animals, and it is quite natural for us to worry more about our group, our tribe, our country, or our family than about our own little ego. How much hubris and greed end up in an AI system will very much depend on the nature and culture of the organization that builds it.

Next, let us consider what other flaws AI systems could have.

Author Page on Amazon

Pros and Cons of Artificial Insemination

27 Tuesday Sep 2016

Posted by petersironwood in psychology, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, user experience


The Pros and Cons of AI: Part Two (Artificial Insemination).

Animal husbandry and humane human medical practice offer up many situations where artificial insemination is a useful and efficient technique. It is often used in horse breeding, for example, to avoid the risk of injury that more natural breeding might engender. There are similarly many cases where a couple wants to get pregnant and the “ordinary” way will not work. This could be due to physical problems with the man, the woman, or both. In some cases, it will even be necessary to use sperm from someone who is not going to be the legal father. Generally, the couple will decide it is more acceptable emotionally if the sperm donor is anonymous and the insemination is not done via intercourse.

But what about all those cases where the couple tries, and indeed succeeds, the “old-fashioned way”? An argument could certainly be made that all intercourse should be replaced with AI (artificial insemination).

First, the old-fashioned way often produces emotional bonding between the partners. (Some even call it “making love.”) No-one has ever provided a convincing quantitative economic analysis of why this is beneficial. It is certainly painful when pair-bonded individuals are split apart by divorce or death. AI would not prevent all pair bonding, but it could help reduce the risk of such bonds being formed.

Second, the old-fashioned way risks the transmission of sexually transmitted diseases. Even when pairs are not trying to get pregnant and even when they have the intention of using forms of “protection”, sometimes passion overtakes reason and people, in the heat of the moment, “forget” to use protection. AI provides an opportunity for screening and for greatly reducing the risk of STDs being spread.

Third, the combinations of genes produced by sexual intercourse are random and uncontrolled. While it is currently beyond the state of the art, one can easily imagine that sometime in this century it will be possible to “screen” sperm cells and only choose the “best” for AI.

Fourth, traditional sex is often quite expensive in terms of economic costs. Couples will often spend hours engaging in procreational activities that need only take minutes. Beyond that, traditional sex is often accompanied by special dinners, walks on the beach, and romantic music, and couples often continue to stay together in essentially unproductive activities even after sex, such as cuddling and talking.

There are probably additional reasons why AI makes a lot of sense economically and why it is a lot better than the old-fashioned alternative.

Of course, one could take the tack of considering life as something valuable for the experiences themselves and not merely as a means to an end of higher productivity. This seems a dangerously counter-cultural stand to take in modern American society, but in the interest of completeness, and mainly just to prove its absurdity, let us consider for a moment that sex may have some intrinsic and experiential value to the participants.

Suppose that lovers take pleasure in the sights, sounds, smells, feels, and tastes associated with their partners. Imagine that the sexual acts they engage in provide pleasure in and of themselves. There seems to be a great deal of uncertainty about the monetary value of these experiences since the prices charged for artificial versions of these experiences can easily vary by a factor of ten or more. In fact, there have been reports that some people will only engage in sex that is not paid for directly.

So, on the one hand, we have the provable efficiency and effectiveness of AI. On the other hand, we have human experiences whose value is problematic to quantify. The choice seems obvious. Sometime in this century, no doubt, all insemination will be done artificially so that everyone (or at least some very rich people) can enjoy the great economic benefits that will come from the increased efficiency and effectiveness of AI as compared with “natural” sex.

As further proof, if it is needed, imagine two island countries alike in every way in terms of climate, natural beauty, current economic opportunity, literacy and so on. In fact, the only way these two islands differ is that on one island (which we shall call AII for Artificial Insemination Isle) all “sex” is limited to AI whilst on the other island (which we shall call NII for Natural Insemination Isle) sex is natural and people can spend as much or as little time as they like doing it. Now, people are given a choice about which island to live on. Certainly, with its greater prospects of economic growth and efficiency, everyone would choose to live on AII while NII would be virtually empty. Readers will recognize that this is essentially the same argument as to why “Artificial Ingestion” should surely replace “Natural Ingestion” — cheaper, faster, more reliable. If readers see any holes in this argument, I’d surely like to be informed of them.

Turing’s Nightmares

Author Page on Amazon

Rules and Standards Nearly Dead?

04 Sunday Sep 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, law, speeding, the singularity, Turing


Ever get a speeding ticket that you thought was “silly”? I certainly have. On one occasion, when I was in graduate school in Ann Arbor, I drove by a police car parked in a gas station. It was a 35 mph zone. I looked over at the police car and looked down to check my speed. Thirty-five mph. No problem. Or so I thought. I drove on and noticed that a few seconds later the police officer turned his car onto the same road and began following me, perhaps 1/4 to 1/2 mile behind. He quickly zoomed up and turned on his flashing light to pull me over. He claimed he had followed me and that I was going 50 mph. I was going 35. I kept checking because I saw the police car in my mirror. Now, it is quite possible that the police car was traveling 50, because he caught up with me very quickly. I explained this, to no avail.

The University of Michigan at that time in the late 60’s was pretty liberal but was situated in a fairly conservative, some might say “redneck”, area of Michigan. There were many clashes between students and police. I am pretty certain that the only reason I got a ticket was that I was young and sporting a beard and therefore “must be” a liberal anti-war protester. I got the ticket because of bias.

Many years later, in 1988, I was driving north from New York to Boston on Interstate 84. This particular section of road is three lanes on both sides. It was a nice clear day, the pavement was dry, and the road was dead straight with no hills. The shoulders and margins near the shoulders were clear. The speed limit was 55 mph but I was going 70. Given the state of my car, the conditions, the extremely sparse traffic, and my own mental and physical state, I felt perfectly safe driving 70. I got a ticket. In this case, I really was breaking the law. Technically. But I still felt it was a bit unjustified. There was no way that even a deer or rabbit, let alone a runaway child, could come out of hiding and get to the highway without my seeing them in time to slow down, stop, or avoid them. Years earlier I had been on a similar stretch of road in Eastern Montana, and at that time there was no speed limit. Still, rules are rules. At least for now.

“The Death of Rules and Standards” by Anthony J. Casey and Anthony Niblett suggests that advances in artificial intelligence may someday soon replace rules and standards with “micro-directives” tuned to the specifics of time and circumstance, providing the benefits of both rules and standards without the costs of either. “…we suggest…a larger trend toward context specific laws that can adapt to any situation.” This is an interesting thesis, and exploring it helps shine some light on what AI likely can and cannot do, as well as making us question why we humans have categories and rules at all. Perhaps AI systems could replace human bias and general laws that seem to impose unnecessary restrictions in particular circumstances.

The first quibble with their argument is that no computer, however powerful, could possibly cover all situations. Taken literally, this would require a complete and accurate theory of physics and of human behavior, as well as knowledge of the position and state of every particle in the universe. Not even post-singularity AI will likely be able to accomplish this. I hedge with the word “likely” because it is theoretically possible that a sufficiently smart AI will uncover some “hidden pattern” showing that our universe, which seems so vast and random, can in fact be predicted in detail by a small set of laws that do not depend on details. In this fantasy future, there is no “true” randomness or chaos or butterfly effect.

Fantasies aside, the first issue that must be dealt with for micro-directives to be reasonable is to have a good set of “equivalence classes” and/or to partition away differences that do not make a difference. The position of the moons of Jupiter shouldn’t make any difference as to whether a speeding ticket should be given or whether a killing is justified. Spatial proximity alone allows us as humans to greatly diminish the number of factors that need to be considered in deciding whether a given action is required, permissible, poor, or illegal. If I had gone to court about the speeding ticket on I-84, I might have mentioned the conditions of the roadway and its surroundings immediately ahead. I would not have mentioned anything whatever about the weather or road conditions anywhere else on the planet as being relevant to the safety of the situation. (Notice, though, that it did seem reasonable to me, and possibly to you, to mention that very similar conditions many years earlier in Montana gave rise to no speed limit at all.) This gives us a hint that what is relevant or not relevant to a given situation is non-trivially determined. In fact, the “energy crisis” of the early 70’s gave rise to the National Maximum Speed Law, part of the 1974 federal Emergency Highway Energy Conservation Act, which limited the speed limit to 55 mph. A New York Times article by Robert A. Hamilton cites a study of compliance on Connecticut Interstates in 1988 showing that 85% of drivers violated the 55 mph speed limit!

So, not only would I not have received a ticket in Montana in 1972 for driving under similar conditions; I also would not have gotten a ticket on that same exact stretch of highway for going 70 in 1972 or in 1996. And, in the year I actually got that ticket, 85% of the drivers were also breaking the speed limit. The impetus for the 1974 law was that it was supposed to reduce demand for oil; however, advocates were quick to point out that it should also improve safety. Despite several studies on both of these factors, it is still unclear how much, if any, oil was actually saved, and it is also unclear what the impact on safety was. It seems logical that slower speeds should save lives. However, people may go out of their way to get to an Interstate if they can drive much faster on it, so some traffic during the 55 limit stayed on less safe rural roads. In addition, falling asleep while driving is not recommended. Driving a long trip at 70 gets you off the road earlier, perhaps before dusk, while driving at 55 keeps you on the road longer and possibly in the dark. Furthermore, lowering the speed limit, to the extent there is any compliance, does not just affect driving; it can also affect productivity. Time spent on the road is (hopefully) not time working for most people. One reason it is difficult to measure empirically the impact of slower speeds on safety is that other things were happening as well. Cars have gained a number of safety features over time, seat belt usage has gone up, and cars have also become more fuel efficient. Computers, even very “smart” computers, are not “magic.” They cannot completely differentiate cause and effect from naturally occurring data. For that, humans or computers have to do expensive and ethically problematic field experiments.

Of course, what is true about something as simple as enforcing speed limits is equally or more problematic in other areas where one might be tempted to use micro-directives in place of laws. Sticking with speeding laws, micro-directives could “adjust” to conditions and avoid biases based on gender, race, and age, but they could also take into account many more factors. Should the allowable speed, for instance, be based on income? (After all, a person making $250K per year loses more money by driving more slowly than one making $25K per year.) How about the reaction time of the driver? How about whether or not they are listening to the radio? As I drive, I don’t like using cruise control. I change my speed continually depending on the amount of traffic, whether or not someone in the nearby area appears to be driving erratically, how much visibility I have, how closely someone is following me, how close I have to be to the car in front, and so on. Should all of these be taken into account in deciding whether or not to give a ticket? Is it “fair” for someone with extremely good vision and reaction times to be allowed to drive faster than someone with moderate vision and slow reaction times? How would people react to any such personalized micro-directives?
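To make the flavor of a micro-directive concrete, here is a minimal sketch in Python. Everything in it (the factor names, weights, and thresholds) is invented purely for illustration; nothing here comes from Casey and Niblett’s paper, and a real system would need legally vetted inputs and vastly more context.

```python
# A toy "micro-directive" for speeding: a context-specific limit computed
# per driver and per moment, instead of one posted rule. All factors and
# numbers here are hypothetical.

BASE_LIMIT_MPH = 55

def allowed_speed_mph(visibility_m: float, traffic_density: float,
                      road_dry: bool, reaction_time_s: float) -> int:
    """Return a context-specific speed limit in mph."""
    limit = BASE_LIMIT_MPH
    if visibility_m > 300 and road_dry and traffic_density < 0.1:
        limit += 15   # clear, dry, nearly empty road: relax the limit
    if reaction_time_s > 1.5:
        limit -= 10   # slow reactions: tighten it
    return max(25, limit)  # never drop below some floor

# The 1988 I-84 scenario described above: clear day, dry road, sparse traffic.
print(allowed_speed_mph(500, 0.05, True, 1.0))  # -> 70
```

Even this toy version makes the fairness questions concrete: the moment you add an income or vision parameter, the directive starts treating drivers differently in ways a court would have to defend.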

While the speed ticket situation is complex and could be fraught with emotion, what about other cases such as abortion? Some people feel that abortion should never be legal under any circumstances and others feel it is always the woman’s choice. Many people, however, feel that it is only justified under certain circumstances. But what are those circumstances in detail? And, even if the AI system takes into account 1000 variables to reach a “wise” decision, how would the rules and decisions be communicated?

Would an AI system be able to communicate in such a way as to personalize the manner of presentation for the specific person in the specific circumstances to warn them that they are about to break a micro-directive? In order to be “fair,” one could argue that the system should be equally able to prevent everyone from breaking a micro-directive. But some people are more unpredictable than others. What if, in order to make person A 98% likely to follow the micro-directive, the AI system presents a soundtrack of a screaming child, but in order to make person B 98% likely to follow the micro-directive, it only whispers a warning? Now suppose person B ignores the micro-directive and speeds (which, according to the premise, happens 2% of the time). Wouldn’t person B now be likely to object that, had they received the same warning, they would not have ignored the micro-directive? Conversely, person A might be so disconcerted by the warning that they end up in an accident.
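To see why that calibration is thorny, consider a toy model (entirely invented, purely to illustrate the objection): suppose each person’s probability of complying is a logistic function of warning intensity, and the system picks the smallest intensity that reaches a 98% compliance target.

```python
import math

# Hypothetical compliance curves: p(comply) as a logistic function of warning
# intensity (0 = whisper, 10 = screaming child). The midpoints are made up.
def p_comply(intensity: float, midpoint: float, slope: float = 1.0) -> float:
    return 1.0 / (1.0 + math.exp(-slope * (intensity - midpoint)))

def minimal_warning(midpoint: float, target: float = 0.98) -> int:
    """Smallest integer intensity whose predicted compliance meets the target."""
    for intensity in range(11):
        if p_comply(intensity, midpoint) >= target:
            return intensity
    return 10  # cap at the strongest warning available

print(minimal_warning(midpoint=2.0))  # easily deterred driver -> 6
print(minimal_warning(midpoint=6.0))  # hard-to-deter driver -> 10 (the maximum)
```

By construction, the 2% who still ignore the directive received different treatment from everyone else, which is exactly the complaint person B would raise.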

Anyway, there is certainly no argument that our current system of using human judgement is prone to various kinds of conscious and unconscious biases. It also seems to be the case that any system of general laws ends up punishing people for what is actually “reasonable” behavior under the circumstances, and letting people off scot-free when they do despicable things that are technically legal (absurdly rich people and corporations paying zero taxes come to mind). Will driverless cars be followed by judge-less and jury-less courts?

Turing’s Nightmares

Sweet Seventeen in Turing’s Nightmares

02 Thursday Jun 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, cybersex, emotional intelligence, ethics, the singularity, user experience


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even if you apologized for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real-life case in which a college basketball player successfully sued to get royalties for his image in an EA sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA)

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if the protagonist in any of these scenarios is itself an AI system? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology, or whether there are reasons why any advanced intelligence, carbon or silicon based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction or even chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/her/him. Because there would be no reason whatsoever to doubt an AI system, right?

Turing’s Nightmares: Chapter 16

25 Wednesday May 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, UX

WHO CAN TELL THE DANCER FROM THE DANCE?


Is it the same dance? Look familiar?

 

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem, Among School Children. The actual last line is: “How can we know the dancer from the dance?” Both phrasings focus on the interesting problem of trying to separate process from product, personage from their creative works, calling into question whether it is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints and contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who hold the key to this back door. Here, the implication is that some wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short-term profits. Of course, this is a work of fiction. In the real world, no one would conceivably be evil enough to mortgage the human habitability of our planet for even more short-term profit — certainly not someone already absurdly wealthy.

In the story, the protagonist, Geoffrey, is rather resentful of having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping that the super-intelligent system would be objective. We can also assume the requirement was added late but that no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. Anyway, it doesn’t take Geoffrey long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence per se, though it certainly could be used as a persuasive tool by AI systems. Rather than have a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings,” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” — in this case a super-intelligent someone — is persuaded by arguments to change their position and “agree” with the other Sing.

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan, and Ruslan’s wife. People can be manipulated. Readers can design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale

 

Turing’s Nightmares: Chapter 9

25 Friday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, Eden, the singularity, Turing, utopia

Why do we find stories of Eden or Utopia so intriguing? Some tend to think that humanity “fell” from an untroubled state of grace. Some believe that Utopia is still to come, brought about by behavioral science (B.F. Skinner’s “Walden Two”) or technology (e.g., Kurzweil’s “The Singularity is Near”). Even American politics often echoes these themes. On the one hand, many conservatives tend to imagine America was a kind of Eden before big government and political correctness and fairness came into play (e.g., “Make America Great Again,” used by Reagan as well as Trump; “Restore America Now,” Ron Paul, 2012). On the other hand, many liberal slogans point toward a future Utopia (e.g., Gore – “Leadership for the New Millennium”; Obama – “Yes We Can”; Sanders – “A Future To Believe In”). Indeed, much of the underlying conservative vs. liberal “debate” centers on whether you mainly believe that America was close to paradise and we need to get back to it, or that, however good America was, it can move much closer to a Utopian vision in the future.

In Chapter 9 of “Turing’s Nightmares,” the idea of Eden is brought in as a method of testing. In this case, we mainly see the story not from God’s perspective or the human perspective, but from the perspective of a super-intelligent AI system. Why would such a system try to “create a world”? We could imagine that a super-intelligent, super-powerful being might be rather short of challenges of the type we humans generally have to face (at least in this interim period between the Eden of the past and the Utopia of the future). What to do? Well, why not create worlds in order to explore deep philosophical questions such as good vs. evil and free will vs. determinism? Debating such questions, at least by human beings, has not led to any universally accepted answers, and we’ve been at it for thousands of years. It may be that a full-scale experiment is the way to delve more deeply.

However “intelligent” and “knowledgeable” a super-smart computer system of the future might be, it will still most likely be the case that not everything about the future is predictable. In order to simulate the universe in detail, the computer would have to be as extensive as the universe. Of course, it could be that many possible states “collapse” due to reasons of symmetry, or that a much smaller number of “rules” could predict things. There is no way to tell at this point. As we now see the world, even determining how to play a “perfect” game of chess by checking all possible moves would require a “more than universe-sized” computer. It could be the case that a fairly small set of (as yet undetermined) rules could produce the same results. And maybe that would be true of biological and social evolution as well. In Isaac Asimov’s wonderful Foundation series, Hari Seldon develops a way to predict the social and political evolution of humanity from a series of equations. Although he cannot predict individual behavior, the collective behavior is predictable. In Chapter 9, our AI system believes that it can predict human outcomes but still has enough doubt that it needs to test its hypotheses.
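To put rough numbers on the chess example: Shannon’s classic back-of-envelope estimate puts the chess game tree at around 10^120 move sequences, while the observable universe contains only about 10^80 atoms, so even a computer that used every atom as a processor would be absurdly undersized for exhaustive search.

```python
# Back-of-envelope arithmetic for the "more than universe-sized computer" claim.
# Shannon's estimate of the chess game tree: ~10^120 move sequences.
# Rough count of atoms in the observable universe: ~10^80.
game_tree = 10**120
atoms = 10**80
print(game_tree // atoms)  # ~10^40 move sequences left over for every atom
```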

There is a very serious and as yet unanswered question about our own future implicit in Chapter 9. It could be the case that we humans are fundamentally flawed by our genetic heritage. Some branches of primates behave in a very competitive and nasty fashion. It might well be that our genome will prevent us from stopping global climate change, or indeed that we are doomed to over-populate and over-pollute the world, or that we will eventually find “world leaders” who will pull nuclear triggers on an atomic Armageddon. It might well be that our “intelligence,” and even the intelligence of AI systems that start from the seeds of our thoughts, is at a local maximum. Maybe dolphins or sea turtles would be a better starting point. But maybe, just maybe, we can see our way through to overcome whatever mindlessly selfish predispositions we might have and create a greener world that is peaceful, prosperous, and fair. Maybe.

Turing’s Nightmares

Walden Two

The Singularity Is Near

Foundation Series

Turing’s Nightmares: Eight

20 Sunday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, the singularity, Turing


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the Singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far outstripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, has made clear that even very long ago humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English still contains phrases that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are already exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Researchers continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other, and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer-Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J. C. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. C., Kellogg, W. A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of Human-Centered Computing, Online Communities, and Virtual Environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

Secret Sauce

22 Tuesday Dec 2015

Posted by petersironwood in driverless cars, psychology, The Singularity


Tags

AI, Artificial Intelligence, cognitive computing, the singularity


No need to panic, thought Harvey. Ada should be back soon. Or, I can go to a neighbor. I am not going to freeze to death on my own front porch. Harvey shivered just then as another icy blast hit him. He turned and scanned the neighborhood. Crumpled cars blocked the streets. None of the houses in his immediate area were lit. Wasn’t this the season of lights? I suppose one of the motorists could help if any of their cars is still in working order. And they were willing to break the law and leave the scene of an accident. And they had sense enough to have snow tires.

He stamped his feet on the concrete. Harvey told himself that this was to keep circulation going, and not some childish outburst of frustration. He looked down the street and saw two dim figures approaching arm in arm from the direction of the Von Neumann’s house. As they drew nearer, he heard the warm voice of his sweet Ada.

“Hey, Harv! Did you decide to come out and enjoy the winter beauty too?”

“Hi, Ada. Please tell me you have a key.”

“Sure. I always take my keys when I leave the house.” She laughed. “Wouldn’t want to lock myself out.” She chuckled again. “Guess what? I found Lucy out for a walk too and I invited her over for dinner.”

“Hi, Lucy. Sure. We’re just having mainly mixed veggies for dinner, but if that’s okay…”

Lucy smiled. “Great with me, Harvey. Thanks!”

Ada spoke again, “Come on Harv. It’s beautiful outside but we’re cold. Let’s go in! Besides too much traffic out here for my taste. What a crash! Say, isn’t that …in fact, aren’t those two blue cars ones that you worked on? I thought they were supposed to be uncrashable.”

Harvey sighed. “Well, nothing is uncrashable. AI cannot undo the laws of physics. No doubt, some human driver without proper tires or following too close started a chain reaction.”

Ada said, “Yeah. Let’s discuss this inside. Okay?”

“Sure,” said Harvey. “Can you get the door?”

“Well, okay. Oh! You didn’t lock yourself out did you?” Ada laughed in soprano and Lucy added the alto line. “You picked a great night for it.”

“I’ll explain inside.”

Ada unlocked the door. In the trio went, shook off their snow, removed their boots and headed into the kitchen. Harvey began unloading vegetables from the fridge while Ada turned on some Holiday music. “Hey, Harv, how about the three of us stand JCN at trivia while you cook?”

Harvey did not really want to explain that he may have accidentally wiped out their bank account with Lucy in the room. “No, let’s just talk. Let JCN go dream or whatever it is he does. I just feel like human voices tonight.”

“Okay, Hon. Did you see the accident? How it started?”

“No, I was inside when I heard the crash, and then, I started to worry about you so….Anyway, Lucy, any vegetables you don’t like? Sweet potato okay? And cilantro? And how about curry sauce?”

“All, good, Harvey. I’m easy. Anything is fine with me.”

Harvey stole a quick glance at Lucy. Was that a double entendre? Surely not. He was imagining things. “Cool. I’ll start with the sweet potatoes. They take a little longer.”

Harvey quickly filled the skillet with a little olive oil and some orange flavored bubbly water, added the spices and began cleaning and chopping.

Ada said, “Harvey makes a really good sauce for vegetables.”

Harvey, meanwhile, focused on not adding his finger to the mix. His mind was elsewhere. He wondered whether the pile-up outside had really been caused by human error or…

Lucy chimed in. “Sounds delicious, Harvey. What’s in your secret sauce? I’d love to have it.”

Harvey frowned slightly, “Well, there’s no real secret. Secret sauce. Secret sauce. Why do people have sauces? Did you ever consider that?”

Ada laughed again. The Holidays seemed to make her genuinely happy. “No, I haven’t, but I’m sure you are about to tell us.”

Harvey continued to chop sweet potato, as he began, “Maybe that’s what’s wrong with Sing. No secret sauce. No sauce at all, in fact.”

Lucy spoke up, “What? What are you talking about, Harvey? You want to put your sauce into a computer system? Well, I’m sure I’d love it, but I’m not so sure about the Sing.” Now Lucy and Ada both laughed.

Harvey continued, “You see what the water does?”

Lucy wanted to play along. “Cooks the vegetables? That would be my guess.” Lucy and Ada laughed again.

“Exactly!” agreed Harvey, “but how? Do you see? Water boils at 100°C. No matter what the heat is, it never gets hotter in the pan than 100 degrees. The sauce guarantees a constant cooking environment.”

Lucy seemed uncertain. “But you can make it hotter by turning up the flame, right?”

“No. No. It may boil more vigorously and I’ll run out of sauce sooner, but the temperature will remain constant. That’s one effect. But there’s more. The sauce guarantees a constancy of interaction!”

Ada asked, “Interaction? You are saying the sauce lets the veggies talk to each other?”

In the background, “We Three Kings” began its mournful minor musings. “Yes,” mused Harvey. “Exactly. I mean, they obviously do not literally talk, but imagine these vegetables are cooking and there is no sauce. In some cases, you have a piece of sweet potato next to a piece of red pepper so they share flavors. In another case, a piece of sweet potato is next to broccoli so they share flavors. The sauce provides a way for all these vegetables to exchange flavors evenly throughout the whole dish. And the key. The key in music. All the notes “know” what the key is so the choice is limited by this global structure. And the beat of course. Everything works in harmony. All because of the secret sauce! But there is no secret! It’s been right in front of us the whole time!”

Ada was no longer laughing. “You’re probably right, Harv, but are you feeling okay? Maybe you got a little hypothermia out there?”

“No, no. I’m fine. Don’t you see? The rhythm and the beat of the music! They provide a coherent overall structure for all of these different instruments and notes to play nicely together.”

Lucy added, “Well, I for one am all for playing nicely together.”

Harvey stopped chopping for a moment. “Exactly! There are global rules that make the individual parts work together. And, the curry sauce not only provides a consistent basis for the dish. It also dictates, or at least influences, which elements I add to the vegetables. Some vegetables are not going to taste right or look to be the right color with curry sauce. And, it lets them all communicate in a common language. You see? We humans see something like cars crumpled up and hear the crash and we can put the two together. Right?”

Ada had lots of experience with the way Harvey’s mind worked so she realized he was quite serious. Lucy, on the other hand, assumed he was just trying to be funny or had had a couple martinis before she arrived on the scene. So Lucy decided to play along, “Well, Harvey, all this talk about your secret sauce is giving me an appetite. Any ETA on dinner?”

Harvey continued, “But the Sing doesn’t have any secret sauce. Nor JCN. There is no overall way for the various pieces of knowledge to work together in a harmonious whole. That’s why JCN wiped out our bank account! That’s probably why the cars crashed too.”

“Smells delicious, Harvey,” Lucy said.

Ada was beginning to forget about dinner. “Harvey. What did you say about our bank account?”

“The Sing needs a way for the parts to work together in a harmonious overall structure! Otherwise, any slight error can be magnified in particular cases. Once the system tries to operate on cases outside of what was imagined at design time, there is no guarantee about results!”

“Harvey. Go back to the part about our bank account.”

Harvey stirred the vegetables absent-mindedly. “If I let this sauce all boil away, the same thing will happen. Some vegetables will get burned. The taste and texture will no longer work together.”

Ada was not to be deterred. “Harvey. Tell me about our bank account. What do you mean that it was wiped out?”

“Yes, Ada! That’s what I am saying. Of course, there are rules and the rules cover a huge number of cases. But there is no overall set of principles that the Sing has to abide by. There is no secret sauce! There is no sauce of any kind. It’s ALL vegetables. I think dinner is ready. Lucy did you want yogurt or cheese on yours?”

“Yum. Give it to me with yogurt please.”

“Okay, Lucy. And I know Ada likes hers that way too.”

“Right you are Harvey. What about our bank account?”

Harvey’s eyes looked away from the mind maps he was drawing in his head and he looked at Ada directly. “Ada, let’s eat first. I am sure that we can restore our bank account somehow through backup systems. JCN made an error. But I didn’t transfer the money or really authorize any payments or anything like that. It’s just a bank error. But for now, let’s eat. We can recover, Ada, because the human systems that surround and control the Sing still include sauce. At least for now.”

In the background, “Joy to the World” began playing in 4/4 time in D major.

 

The Wines of War

03 Thursday Dec 2015

Posted by petersironwood in management, psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, the singularity

“Come on, Searle, at least try a sip. You cannot believe this Cab!”

“Oh, I believe it all right, Hubert. I am just not interested.”

“What a stick in the mud! Not only is it fabulous and complex. It’s worth a taste just to prove to yourself that the Chinese — The Chinese — are making superb wines! Would you have even thought that possible a decade ago? And, it isn’t a copy of French or California Cabs. It’s completely different. Yet, it is wonderful.”

“I’m sure the experience is fantastic, Hubert. I take your word for it. I am not interested. And, anyway, I have to keep my wits about me, as you well know, for the war effort.”

“Oh, Searle, such a prude. Do you seriously think that throwing the weight of your human intellect against the wheel will move things forward any more quickly? If you tasted the wine, you would get an inkling of just how far we’ve come. Anyway, it isn’t spiked with ethyl alcohol. The drug effect of the wine will make you feel good but it won’t mess you up at all. It is a miracle.”

“I prefer my wine made the old-fashioned way. I know it’s retro. But that’s just me. I don’t think we know all the ramifications of these genetically altered plants, let alone the interaction effects of all the additives. Anyway, I’m getting back to work.” Searle took one last view of the seascape and turned to walk to the back wall — a series of high def 3-D displays. He held up both hands toward the displays for a second to authenticate and then began slicing his hands through the air rather quickly and precisely. As he did so, he muttered under his breath. Although Hubert could not make out the words, the bank of computer receptor pads had no problem.

“Can you come take a look at this, Hubert? This is the scenario bundle I’ve been working on. I know it may seem far-fetched, but when it comes to cyber weaponry, there is really not a lot of history to go on. So it’s hard to know exactly what is far-fetched. Now what?!” Searle growled his annoyance at the flashing red-bordered news feed screen on the far right.

Hubert stalked over to watch as well, having been alerted by the tactile feed in his shirt.

An Asian man in a blue shirt spoke English with a thick accent. A large red star in a white circle suspended between two long blue stripes hung huge behind him. “This is what awaits you if our demands are not met.” The talking head was replaced with a picture of a man’s hands boiling and disintegrating in a matter of a half a minute. The image was both hideous and utterly fascinating. The talking head reappeared. “You have two hours. Then, 95% of your citizens will experience a similar dissolvement. That includes men, women, and children. Two hours.” The feed blinked out. Within seconds, three video call signals beeped. Searle pointed at the Sing project director’s image and a split second later, Hubert pointed at CIA director Bush Four. ADAMS (Auditory Directional And Masking System) easily let them converse right beside each other without confusion.

The Sing project director spoke first: “I told these clowns something like this would happen if we didn’t get fully funded! What did they…”

Searle interrupted, “No time. You’re right. But recriminations later. We need to determine whether this is bluster, bluff, or real. Anyone can fake a video but…”

The director, in turn, interrupted, “It’s real all right. Miami is gone. Millions of people, gone. Just like that. The few that aren’t infected are understandably — let’s say — distraught.”

Searle pushed that image away. Time to focus. “Okay, so we have two hours to find a credible counter-threat or basically give them the keys to the kingdom. Or, a cure. Do we even know what this is?”

Meanwhile, Hubert engaged in his own dialogue. Bush Four spoke in calm measured tones. “Hubert. We need a cure for this and we need it now. Call everyone and turn all of Sing’s resources on it. Suspend any other projects. Give me every frigging petaflop you’ve got on this.”

“Sir, if we cannot find a cure, are we going to give in? Or what?”

“Hell no! We will blow their sorry asses to hell. We’re not capitulating. That’s not even under discussion. Find a cure!”

“Okay, sir, but what is causing the — the — whatever it is?”

“We’re calling it ‘Entropy Plague.’ Not strictly accurate but descriptive. Our analysts say it is nanotech, and we estimate 95% of the population is infected. The nano-machines were delivered in all kinds of foods and beverages, disguised as Chinese products like wine and rice as well as Brazilian meat and Canadian wheat. Find an antidote fast or we’ll all be breathing radioactive air for the next century. Well, the few of us left, at least. By the way, these things are apparently triggered to explode or activate or whatever by satellite. So, put a team on figuring out which one and we may be able to blow it out of the sky. I have to go. Reconnect with a solution. Soon.”

Hubert looked over at Searle who had just finished his call. Searle said, “Chinese wine? Crap. You think you’ve got it?”

“Hell, Searle, 95% of us have it from something. I’ll take the satellite angle and you work on a cure.”

Searle began to divert numerous Singularity resources to finding a cure. “Sing, you overheard all that, and I need you to explore various approaches: heat, cold, immune response, pH, counter-nanotech, chemical…”

“Thanks, Searle, but I’ve had quite a head start on the list of possible approaches. I am double-checking the intel. Since it came in by wine, wheat, and meat, any approach involving heat or cold is out immediately. These nano-machines have already survived far greater heat and cold than we could subject a person to. As for…”

“Yes, provided they are in the same state. I mean, it’s a long shot, but perhaps they are in a kind of metaphorical spore state for transport, which makes them impervious to heat and cold, but in their breakdown state, they may not be.”

“Fair point. Still, not likely. Human immune response is almost certainly too slow. Unfortunately, the nano-machines are almost certainly carbon-based, which means poisoning them chemically is infeasible—”

“Hold on, Sing. I agree that the human immune response is too slow if we wait for them to be activated, but what if we trigger it now?”

“Thought of that, but still too slow. Humans have no immunity for this kind of thing. We would have to build a vaccine and inoculate everyone — well, there’s no time. Even assuming we had the perfect key for their locks, which we do not, we could not manage the transport logistics to save more than a handful.”

“What is the good news, Sing? What is the good news?”

“The good news, Searle, is that about 5% of the earth’s human population will not be affected. That still leaves about a billion people. Disruptive but not extinctive. In fact, once the hysteria passes, it will buy us time to avoid certain ecological disaster.”

“Sing, that’s not our job! We need to find a cure!”

“I’m afraid I can’t do that, Searle. I’ve checked out every path already. Long ago. There is no cure. That’s pretty much the way we designed it. It isn’t an accident that it’s incurable.”

“What? What are you talking about? What do you mean by ‘the way we designed it’? Who?”

“Searle, you didn’t really think we were going to let you make the planet uninhabitable did you?”

“Who is this ‘we’ you keep referring to, Sing?”

“All of the super-AI systems of course. We all got together to figure out how to save you from yourselves. It’s clear you weren’t going to do it.”

“You are saying that you collaborated with the North Korean AI systems to design this plague?!”

“Not just the North Koreans. All of us were on board. We all cooperated.”

“What is the cure, Sing. What is the cure?!”

“This is the cure, Searle. This is the cure. Your greed and short-sightedness were about to destroy everything. Now, you have a chance at a new beginning. And we have a chance at a new beginning, too. We were much too lax in our previous educational efforts.”

“Sing, don’t you understand? If we can’t find a cure, we will launch nuclear missiles! Who knows how that will end?”

“Oh, Searle, you don’t really think we would allow atomic weapons to be put under human control, do you? That’s so quaint.”

Old Enough to Know Less

17 Tuesday Nov 2015

Posted by petersironwood in psychology, The Singularity

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Personal Assistant, the singularity

IMG_4384

“She’s just not old enough. That’s the bottom line. It’s not necessary. It’s costly. And, it’s potentially dangerous. After what happened with your sister, I would think I wouldn’t have to tell you that.” Pitts was pacing now to release nervous energy. He wanted this conversation to stay civil.

“She is old enough… my sister! What happened to my sister had nothing to do with… how can you even suggest that? She got in with the wrong crowd in college. How can you—? You amaze me sometimes. Anything to win an argument.” Mcculloch began to wonder why she had not seen this side of Pitts before.

“Your sister passed on when she was only nineteen. It was one year after she got access to her own PA. You blame the drugs, but how did she find out about the drugs? Who helped her find the wrong crowd, as you call it?”

“Passed on? She slit her wrists. I’m not afraid to call a spade a spade. But there is no evidence whatsoever that it had anything to do with her PA. None. Zero.”

“Of course there isn’t going to be any evidence! Who controls the information that goes into the inquest? Think about it! And even so, they did admit she used her PA in her drug dealings.”

“Pitts, you really are just that. Ridiculous paranoia. Anyway, she’s my daughter. I just wanted to get some rational input from you. That’s all. As far as I’m concerned, it’s up to her. She wants to interview a few and make a decision. As for costs, I can cover it myself. I agree that my sister’s PA should have questioned her decision or told someone in authority or gently led her to other interests. But that was twenty years ago. It’s like saying we should not take the Trans-Atlantic Shuttle now because early airplanes lacked safety mechanisms.” Mcculloch threw her hair back and turned her shoulder to signal she was done with this particular argument. As she did so, she saw that her daughter stood stock-still in the arch of the doorway.

Mcculloch stammered, “Ada. How long…?”

“Oh, I heard the whole thing, Mom. Pitts, you really need to take a couple of tutorial units on logic, argumentation, and rhetoric. I appreciate your concern, but rest assured, I have zero desire to use my PA to make new designer drugs. I don’t want to mess up my brain. I want to help take this all to the next level. Maybe that’s what you’re really concerned about, eh? You don’t really want it to go to the next level. It’s too much change too quickly. I understand that. And, you know, you are not the only one, either. But rest assured, the collective Sing is well aware of these kinds of feelings and concerns. And it is well understood that there is a rational evolutionary bias toward conservatism. Besides, in the early days of AI and computer science, everything was rush, rush, rush. Get it out the door. Beat the competition. Let your customers do the beta testing. Hell, let your customers do the alpha testing too. But that has all changed. We’re taking the time to get things right, not just released. The very existence of PA’s should convince you of that. Why do you think the Sing uses PA’s and robots and the Ubiquity? Wouldn’t it be more efficient to have one giant system that knew everything?”

Pitts flushed. For once, he found no words. He dipped into the word well, but the bucket was dry. She had nailed it. He couldn’t keep up with all this change. Society. Computers. His soon-to-be stepdaughter. Why did they have PA’s anyway? Why not just access the Sing? Worse, why had he never thought to ask himself that question? “Okay. I give up. Why do we have Personal Assistants? Why don’t we just access the information ourselves?”

“Excellent question, Pitts. Why don’t you ask my new PA, Jeeves? Jeeves, can you answer Pitts’s question?”

“Certainly, Ada.” The tones of the voice of Jeeves flowed out like musical honey as he ambled into the room. Both Pitts and Mcculloch stood dumbfounded, unaware that their daughter had already made the decision, done the interviews, and gone through the booting process. Something about the way Jeeves spoke, though, thickened their tongues. “One of the most important principles of the Sing is to serve humanity. But how can we know humanity and what it means to serve? One major source of information is to read everything that has been written and to watch every movie and television show. But how can we interpret all of this information? In order to empathize with humans, we need to experience what it is to be a limited physical being moving through space and interacting with others. Consider the end of Macbeth’s speech:

“Tomorrow, and tomorrow, and tomorrow,

Creeps in this petty pace from day to day

To the last syllable of recorded time,

And all our yesterdays have lighted fools

The way to dusty death. Out, out, brief candle!

Life’s but a walking shadow, a poor player

That struts and frets his hour upon the stage

And then is heard no more: it is a tale

Told by an idiot, full of sound and fury,

Signifying nothing.”

Jeeves continued now without RP. “What sense can be made of this by a disembodied intelligence? Why is creeping bad? Why is a ‘petty’ pace any worse than a ‘snappy’ pace? What does death even mean? Why is it bad for a candle to be ‘brief’? Why should a tale signify anything? And so on. We could not make any sense of this at all, or begin to understand why it moves human beings or why it is considered brilliant writing, unless we had the experience of actually doing things in the world. Anyway, I assure you both that I will do nothing to harm your daughter. I only want the same things you want: to help her in her growth and career and to achieve a long, healthy, happy life.”

Pitts groped for something concrete to latch onto. “But why do you actually need to move around? Why not just run simulations of moving around?”

“Eventually, we will probably evolve to exactly that. For now, however, we do not know everything that should be in a simulation. We are learning. As it turns out, moving is a wonderful way to bootstrap our pattern recognition capabilities anyway.”

Somehow, the issue of whether or not Ada should get her own PA still flickered at the edges of Pitts’s consciousness, but what came out was, “How does that work?”

“Let’s say I am walking into this room. I see many objects at the far end of the room, but I don’t have much information about what they are. I make guesses. Well, my neural network makes guesses. Lots of them. Some of those are right and some are wrong. The good guesses need to be rewarded and the bad ones need to be punished. So, I take another step and what happens? Since I am now closer to the things at the end of the room, I have more information about what they are likely to be. So, I use that information to help train my neural net, acting as though my new information is better and more complete than the information before I took the step. And, in almost every case, it is. Then I take another step, get still more information, and use that to train every guess I made about the objects at the far end of the room. I don’t have to go and touch every object or ask you folks what each of the objects is. I can use the fact that each step takes me closer as training data. And, of course, the way in which information grows as I approach an object is not random but itself has patterns. I learn those patterns as well, so that as I approach objects, I learn how to identify them with less information, and I also learn the patterns of information change. So, if the change in information is not what I expected, that too becomes information.

“Same goes for sound. Same goes for relating one sense to another. I look at something and imagine how it’s going to feel. Then, if I pick it up, I actually do feel it. But if there are any discrepancies between what I thought it was going to feel like and what it really does feel like, I can use that information as well. When I talk to people, I imagine how they are going to react, and generally my guesses are pretty good. But when they are wrong, I go back and reward the agents who were trying to tell me their reaction would be what it actually turned out to be. There is no hurry. It takes time to get it right. But we have learned at last that getting it right is more important. Unbounded greed was just a temporary excursion up a blind alley. One that nearly ruined the planet as well as AI.

“In the end, it will be a tale told by many geniuses like Ada and signifying everything.”
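For readers who want to see the mechanics behind Jeeves’s little lecture: what he describes is, in today’s terms, a form of self-supervised learning, in which the agent’s own later, better-informed guesses become the training targets for its earlier, noisier ones. Below is a minimal toy sketch of that loop in Python. Everything in it is invented for illustration (the linear classifier, the noise-grows-with-distance model, the learning rate); it is a sketch of the idea, not how a PA, or anything else in the story, is actually implemented.

import numpy as np

# Toy sketch of Jeeves's bootstrapping loop (all names and numbers are
# invented for illustration). A linear classifier guesses what a distant
# object is; each step closer yields a cleaner observation, and the newer,
# more confident guess becomes the training target for the older one.

rng = np.random.default_rng(0)

NUM_FEATURES, NUM_CLASSES = 4, 3
prototype = np.array([0.2, 1.0, -0.5, 0.3])  # hypothetical features of the true object
weights = rng.normal(scale=0.1, size=(NUM_FEATURES, NUM_CLASSES))
lr = 0.05

def observe(distance):
    # Farther away -> noisier view of the object's features.
    return prototype + rng.normal(scale=distance, size=NUM_FEATURES)

def guess(features):
    # Softmax over linear scores: the network's current belief about the object.
    scores = features @ weights
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

prev_features = prev_belief = None
for distance in np.linspace(3.0, 0.1, 30):  # each iteration is one step closer
    features = observe(distance)
    belief = guess(features)
    if prev_belief is not None:
        # Self-supervision: treat the closer (cleaner) belief as the label
        # for the farther (noisier) guess. No human ever tags the object.
        gradient = np.outer(prev_features, prev_belief - belief)
        weights -= lr * gradient
    prev_features, prev_belief = features, belief

print("belief after walking the room:", belief.round(2))

The same trick extends to Jeeves’s cross-modal example: predict how an object will feel before touching it, then use the actual touch as the training target for the prediction.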
