petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: Robotics

Cars that Lock too Much

20 Friday Mar 2020

Posted by petersironwood in America, driverless cars, psychology, story, Travel

≈ 2 Comments

Tags

AI, anecdote, computer, HCI, human factors, humor, IntelligentAgent, IT, Robotics, story, UI, UX

{Now, for something completely different, a chapter about “Intelligent Agents” and attempts to do “too much” for the user. If you’ve had similar experiences, please comment! Thanks.}


At last, we arrive in Kauai, the Garden Island. The rental car we’ve chosen is a bit on the luxurious side (a Mercury Marquis), but it’s one of the few with a trunk large enough to hold our golf club traveling bags. W. has been waiting curbside with our bags while I picked up the rental car, and now I pull up beside her to load up. The policeman motioning for me to keep moving can’t be serious, not like a New York police officer. After all, this is Hawaii, the Aloha State. I get out of the car and explain that we will just be a second loading up. He looks at me, then at my rental car, and then back at me with a skeptical scowl. He shrugs ever so slightly, which I take to mean assent. “Thanks.” W. wants to throw her purse in the back seat before the heavy lifting starts. She jerks on the handle. The door is locked.

“Why didn’t you unlock the door?” she asks, with just a hint of annoyance in her voice. After all, it has been a very long day: we arose before the crack of dawn and drove to JFK in order to spend the day flying here.

“I did unlock the door,” I counter.  

“Well, it’s locked now,” she counters my counter.

I can’t deny that, so I walk back around to the driver’s side, unlock the door with my key, and then push the UNLOCK button, which so nicely unlocks all the doors.

The police officer steps over. “I thought you said you’d just be a second.”

“Sorry, officer,” I reply. “We just need to get these bags in. We’ll be on our way.”

Click.

W. tries the door handle.  The door is locked again.  “I thought you went to unlock the door,” she sighs.

“I did unlock the door. Again. Look, I’ll unlock the door and you open it right away.” I go back to the driver’s side and use my key to unlock the door. Then I push the UNLOCK button, but W. is just a tad too early with her handle action and the door doesn’t unlock. So, I tell her to wait a second.

[Photo of a man riding a motorcycle, by Brett Sayles on Pexels.com]

“What?” This luxury car is scientifically engineered not to let any outside sounds disturb the driver or passenger. Unfortunately, this same sophisticated acoustic engineering also prevents any sounds that the driver might be making from escaping into the warm Hawaiian air. I push the UNLOCK button again. W. looks at me, puzzled.

I see dead people in my future if we don’t get the car loaded soon. For a moment, the police officer is busy elsewhere, but begins to stroll back toward us. I rush around the car and grab at the rear door handle on the passenger side. 

But just a little too late.  

“Okay,” I say in an even, controlled voice.  “Let’s just put the bags in the trunk.  Then we’ll deal with the rest of our stuff.” 

The police officer is beginning to change color now, chameleon-like, into something like a hibiscus flower. “Look,” he growls. “Get this car out of here.”

“Right.” I have no idea how we are going to coordinate this. Am I going to have to park and drag all our stuff, or what? Anyway, I go to the driver’s side and see that someone has left the keys in the ignition but locked the car door; actually, all the car doors. A terrifying thought flashes into my mind. Could this car have been named after the Marquis de Sade? That hadn’t occurred to me before.

[Photo of a car, by Dom J on Pexels.com]

Now, I have to say right off the bat that my father was an engineer and some of my best friends are engineers. And, I know that the engineer who designed the safety locking features of this car had our welfare in mind. I know, without a doubt, that our best interests were uppermost. He or she was thinking of the following kind of scenario. 

“Suppose this teenage couple is out parking and they get attacked by the Creature from the Black Lagoon. Wouldn’t it be cool if the doors locked just a split second after they got in? Those saved milliseconds could be crucial.”

Well, it’s a nice thought, I grant you, but first of all, teenage couples don’t bother to “park” any more. And, second, the Creature from the Black Lagoon is equally dated, not to mention dead. In the course of our two weeks in Hawaii, our car locked itself on 48 separate, unnecessary and totally annoying occasions.  

And, I wouldn’t mind so much our $100 ticket and the inconvenience at the airport if it were only misguided car locks. But, you and I both know that it isn’t just misguided car locks. No, we are beginning to be bombarded with “smart technology” that is typically really stupid. 

[Photo of a man in a black suit sitting on a chair, by Andrea Piacquadio on Pexels.com]

As another case in point, as I type this manuscript, the editor or sadistitor or whatever it is tries to help me by scrolling the page up and down in a seemingly random fashion, so that I am looking at the words I’m typing just HERE when quite unexpectedly and suddenly they appear HERE. (Well, I know this is hard to explain without hand gestures; you’ll have to trust me that it’s highly annoying.) This is the same “editor” or “assistant” or whatever that allowed me to center the title and the authors’ names. Fine. On to the second page. Well, I don’t want the rest of the document centered, so I choose the icon for left-justified text. That seems plausible enough. So far, so good. Then I happen to look back up at the authors’ names. They are also left-justified. Why?

Somehow, this intelligent software must have figured, “Well, hey, if the writer wants this text he’s about to type to be left-justified, I’ll just bet that he or she meant to left-justify what was just typed as well.” Thanks, but no thanks. I went back and centered the authors’ names. Then I inserted a page break and went to write the text of this book. But guess what? It’s centered. No, I don’t want the whole book centered, so I click on the icon for left-justification again. And, again, my brilliant little friend behind the scenes left-justifies the authors’ names. I’m starting to wonder whether this program is named (using a hash code) for the Marquis de Sade.

On the other hand, in places where you’d think the software might eventually “get a clue” about my intentions, it never does. For example, whenever I open up a “certain program,” it always starts, by default, about four levels up in the hierarchy of the directory chain. It never seems to notice that I never do anything but dive four levels down and open up files there. Ah, well. This situation came about in the first place because somehow this machine figures that “My Computer” and “My hard drive” are SUB-sets of “My Documents.” What?


Did I mention another “Intelligent Agent”? Let us just call him “Staple.” At first, “Staple” did not seem so annoying. Just a few absurd and totally out-of-context suggestions down in the corner of the page. But then, I guess because he felt ignored, he began to become grumpier. And more obnoxious. Now, he’s gotten into the following habit. Whenever I begin to prepare a presentation… but first, you have to understand the context.

In case you haven’t noticed, American “productivity” is way up. What does that really mean? It means that fewer and fewer people are left doing the jobs that more and more people used to do. In other words, it means that whenever I am working on a presentation, I have no time for jokes. I’m not in the mood. Generally, I get e-mail insisting that I summarize a lifetime of work in 2-3 foils for an unspecified audience and an unspecified purpose, but with the undertone that if I don’t do a great job, I’ll be on the breadline. A typical e-mail request might read like this:

“Classification: URGENT.

“Date: June 4th, 2002.

“Subject: Bible

“Please summarize the Bible in two foils. We need this as soon as possible but no later than June 3rd, 2002. Include business proposition, headcount, overall costs, anticipated benefits and all major technical issues. By the way, travel expenses have been limited to reimbursement for hitchhiking gear.”

Okay, I am beginning to get an inkling that the word “Urgent” has begun to get over-applied. If someone is choking to death, that is “urgent.” If a plane is about to smash into a highly populated area, that is “urgent.” If a pandemic is about to sweep the country, that is “urgent.” If some executive is trying to get a raise by showing his boss how smart he is, I’m sorry, but that might be “important” or perhaps “useful” but it is sure as heck not “urgent.”  

All right. Now you understand that inane suggestions, in this context, are not really all that appreciated. In a different era, with a different economic climate, in an English pub after a couple of pints of McEwan’s or McSorley’s or Guinness, after a couple of dart games, I might be in the mood for idiotic interruptions. But not here, not now, not in this actual and extremely material world.

So, imagine my reaction to the following scenario. I’m attempting to summarize the Bible in two foils and up pops Mr. “Staple” with a question: “Do you want me to show you how to install the driver for an external projector?” Uh, no thanks. I have to admit that the first time this little annoyance appeared, I had zero temptation to drive my fist through the flat panel display. I just clicked NO and then DON’T SHOW ME THIS HINT AGAIN. And soon I was back to the urgent job of summarizing the Bible in two foils.

About 1.414 days later, I got another “urgent” request.

“You must fill out form AZ-78666 on-line and prepare a justification presentation (no more than 2 foils). Please do not respond to this e-mail as it was sent from a disconnected service machine. If you have any questions, please call the following [uninstalled] number: 222-111-9999.”  

Sure, I’m used to this by now. But when I open up the application, what do I see? You guessed it. A happy smiley little “Staple” with a question: 

“Do you want me to show you how to install the driver for an external projector?” 

“No,” I mutter to myself, “and I’m pretty sure we already had this conversation.” I click on NO THANKS. And on I DON’T WANT TO SEE THIS HINT AGAIN. (But of course, the “intelligent agent,” in its infinite wisdom, knows that secretly it’s my life’s ambition to see this hint again and again and again.)

A friend of mine did something to my word processing program. I don’t know what. Nor does she. But now, whenever I begin a file, rather than having a large space in which to type and a small space off to the left for outlining, I have a large space for outlining and a teeny space to type. No one has been able to figure this out. But I’m sure that in some curious way, the software has intuited (as has the reader) that I need much more time spent on organization and less time (and space) devoted to what I actually say. (Chalk up a “correct” for the IA. As they say, “Even a blind tiger sometimes eats a poacher,” or whatever the expression is.)

Well, I shrank the region for outlining and expanded the region for typing, and guess what? You guessed it! Another intelligent agent decided to “change my font.” So now, instead of the font I’m used to … which is still listed in the toolbar the same way, 12 point, Times New Roman … I have a font which actually looks more like 16 point. And at long last, the Intelligent Agent pops up with a question I can relate to! “Would you like me to install someone competent in the Putin misadministration?”

What do you know? “Even a blind tiger sometimes eats a poacher.”


Author Page on Amazon

Start of the First Book of The Myths of the Veritas

Start of the Second Book of the Myths of the Veritas

Table of Contents for the Second Book of the Veritas

Table of Contents for Essays on America 

Index for a Pattern Language for Teamwork and Collaboration  

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. It has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. The implicit theory is that if each of the sub-problems is well solved, the overall problem will be solved as well. The tricky part is deciding what we consider “problem” and what we consider “context,” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service employed engineers to fix problems and dispatchers who answered phones and sent engineers out to fix those problems. Engineers were incented to solve problems, while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young, but one of the older dispatchers was considerably slower than most. She handled only about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot, and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time, while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Newell, Shaw, and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (the Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so purchasing the book becomes your sub-goal; now that is your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate, because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, those bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
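To make the contrast concrete, here is a toy sketch in Python. It is purely illustrative (my invention for this post, not GPS itself and not code from the dissertation): the goal names, the plan table, and the little “world” dictionary are all made up. The GPS-style solver commits to the whole sub-goal chain and grinds through it, while the opportunistic solver re-checks the world before expanding each goal, so the roommate’s offer short-circuits the chain near the top.

```python
# Toy illustration only: goal names, the plan table, and the "world" state are
# invented for this example; real GPS used means-ends analysis over operators.

def gps_style(goal, plans, world, depth=0):
    """Expand the committed sub-goal chain depth-first, never re-checking the world."""
    print("  " * depth + "pursuing: " + goal)
    for subgoal in plans.get(goal, []):
        gps_style(subgoal, plans, world, depth + 1)
    world[goal] = True  # pretend the action at this level succeeded
    print("  " * depth + "achieved: " + goal)

def opportunistic(goal, plans, world, depth=0):
    """Before expanding a goal, notice whether the world already satisfies it."""
    if world.get(goal):
        print("  " * depth + goal + " is already satisfied; skip its whole sub-tree")
        return
    print("  " * depth + "pursuing: " + goal)
    for subgoal in plans.get(goal, []):
        opportunistic(subgoal, plans, world, depth + 1)
    world[goal] = True
    print("  " * depth + "achieved: " + goal)

# The book-reading example as a goal hierarchy.
plans = {
    "read the book": ["have the book"],
    "have the book": ["get $50 in cash"],
    "get $50 in cash": ["shovel uncle's driveway"],
    "shovel uncle's driveway": ["borrow roommate's car"],
}

# The roommate's remark changes the world: the book is already available.
world = {"have the book": True}

opportunistic("read the book", plans, dict(world))  # borrows the copy, stops early
gps_style("read the book", plans, dict(world))      # grinds through every level anyway
```

The point of the sketch is not the code itself but where the check lives: the opportunistic solver asks “is this already done?” at every level, which is exactly the step that siloed departments, each optimizing its own sub-goal, have little incentive to perform.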

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, and if there is a high degree of trust, and if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat” and discourage “time-wasting” activities like socializing with co-workers and “saving money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solving system.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in the chapter is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, thus making choices that are globally intelligent nearly impossible due to a lack of knowledge and a lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M,” “University of Michigan,” “Michigan,” “The University of Michigan,” or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine in any way.

That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need to allow users to communicate in any way that there was an error in the design. If one tries to communicate “out of band,” one is led to a FAQ page and ultimately to a form to fill out. The form presumes that all errors are due to user errors and that all of these user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
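For what it’s worth, the alternative design argued for above is not hard to sketch. The snippet below is a minimal illustration of mine (not the actual form’s code): accept free-typed text, offer near-matches from whatever canonical list exists using Python’s standard-library difflib, and keep the user’s own wording, flagged for review, when nothing matches. The four-entry list is obviously a stand-in for the thousands of real institutions.

```python
# Minimal sketch of "type it in, suggest matches, never block free text."
# The canonical list here is a tiny stand-in for a real institution database.
from difflib import get_close_matches

CANONICAL = [
    "University of Michigan",
    "Michigan State University",
    "University of Washington",
    "Carnegie Mellon University",
]

def resolve_institution(typed: str) -> str:
    """Map typed input onto the canonical list when a close match exists,
    but keep the user's own text (flagged for review) when it does not."""
    matches = get_close_matches(typed, CANONICAL, n=3, cutoff=0.6)
    if matches:
        print(f"Suggesting canonical match(es) for {typed!r}: {matches}")
        return matches[0]  # in a real form, the user would confirm or decline
    print(f"No match; keeping free-text entry for later review: {typed!r}")
    return typed

resolve_institution("U of Michigan")            # close enough to map onto the canonical spelling
resolve_institution("Kauai Community College")  # not on the list, accepted anyway
```

The design choice this illustrates is simply humility: the list is treated as a helpful suggestion rather than as a complete and final model of the world.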

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by sifting just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A., Shaw, J. C., & Simon, H. A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256–264.

Quinlan, J. R., & Hunt, E. B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625–646. DOI: 10.1145/321479.321487

Thomas, J. C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257–269.

Turing’s Nightmares

Turing’s Nightmares: Chapter Three

27 Saturday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, the singularity, Turing

In chapter three of Turing’s Nightmares, entitled “Thank Goodness the Computer Understands Us!”, there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value, for improving intelligence, of having multiple and diverse AI systems living somewhat different lives and interacting with each other; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely, to some extent, on inference from many real-life examples to induce principles of conduct, rather than simply relying on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some practical applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of a particular person more easily than if we were to develop speaker-independent speech recognition and generic preference profiles. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having actual robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18.

I would not personally argue that having an entity that moves through space and perceives is necessary for having any intelligence or, for that matter, any consciousness. However, it seems quite natural to believe that the quality of intelligence and consciousness is influenced by what is possible for the entity to perceive and to do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were founded historically by a community that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive with any of the specific senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if such a being were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that, in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872–876), two kittens were connected via a pivoted gondola apparatus: one kitten was able to “walk” through a visual field while the other was passively moved through that same field. The kitten that was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831–843; Kuhl, P. K., Tsao, F. M., and Liu, H. M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096–9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several different robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity seems to be an advantage when it comes to genetic evolution and when it comes to the people who make up teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on a developing intelligence by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow down progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to! An analogy might be the first “proof” that you only need four colors to color any planar map. There were so many cases (nearly 2,000) that the proof made no sense to most people. Even the mathematicians who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be way too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy!, the advances along the trajectory toward “The Singularity” will require that the system “read” and infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching the “Golden Rule.” But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified as “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on the guy’s physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying other entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares


Turing’s Nightmares: Thank Goodness the Robots Understand Us!

21 Friday Aug 2015

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

AI, cognitive computing, ethics, Robotics, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input data directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like ‘Simon Says.’ Soon, they made their own variations that were … new … well, better, really. What’s also amazing is that the slight differences in the tradeoffs among certain values that we intentionally initialized have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next, improved generation of AI systems. We are still trying to understand the nature of the debate, since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts among the proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “It just sounds like complex but noisy music really. It’s not very interpretable without a lot of decoding work. Even then, we only understand a fraction of their debate. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”
