Tag Archives: Robotics

Turing’s Nightmares: Chapter Three

Tuesday, 11 Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!”, at least four major issues are touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value of having multiple and diverse AI systems, living somewhat different lives and interacting with each other, for improving intelligence; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct, and not simply on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands; in some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of a particular person more easily than we could develop speaker-independent speech recognition and preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for having any intelligence or, for that matter, any consciousness. However, it seems quite natural to believe that the quality of both intelligence and consciousness is influenced by what the entity is able to perceive and to do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were founded historically by a community that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses that we do. Instead, imagine that they were, quite literally, a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., having glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. & Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked together on a pivoted gondola apparatus: one kitten was able to “walk” through a visual field while the other was passively moved through that same visual field. The kitten that was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., & Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution and when it comes to composing human teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote at the ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligences by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow down progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to. An analogy might be the first “proof” that you need only four colors to color any planar map. There were so many cases (nearly 2,000) that the proof made no sense to most people. Even the mathematicians who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different from and better (at least for them) than any we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)


But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified as “Do unto others as you would have them do to you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their own physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a long history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares


Author Page

Welcome, Singularity

Destroying Natural Intelligence

How the Nightingale Learned to Sing

The First Ring of Empathy

The Walkabout Diaries: Variation

Sadie and The Lighty Ball

The Dance of Billions

Imagine All the People

We Won the War!

Roar, Ocean, Roar

Essays on America: The Game

Peace

It’s not Your Fault; It’s not Your Fault

Thursday, 6 Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, books, chatgpt, cognitive computing, Courtroom, Design, ethics, fiction, future, law, photography, Robotics, SciFi, technology, the singularity, Turing


“Objection, your honor! Hearsay!” Gerry’s voice held just the practiced and proper combination of righteous outrage and reasoned eloquence.

“Objection noted but over-ruled.” The Sing’s voice rang out with even more practiced tones. It sounded at once warmly human yet immensely powerful.

“But Your Honor…” began Gerry.

“Objection noted and overruled,” The Sing repeated with the slightest traces of feigned impatience, annoyance, and the threat of a contempt citation.

Gerry sat. He drew in a deep, calming breath and felt comforted by the rich smell of the panelled chambers. He began calculating his next move. He shook his head. He admired the balanced precision of The Sing’s various emotional projections. Gerry had once prided himself on nuance, but he realized The Sing was like an estate-bottled cabernet from a great year while Gerry himself was more like wine in a box.

The Sing continued in a voice of humble reasonableness with undertones of boredom. “The witness will answer the question.”

Harvey wriggled uncomfortably trying to think clearly despite his nervousness. “I don’t exactly recall what he said in answer to my question, but surely…” Harvey paused and glanced nervously at Gerry looking for a clue, but Gerry was paging through his notecards. “Surely, there are recordings that would be more accurate than my recollection.”

The DA turned to The Sing avatar and held up a sheaf of paper. “Indeed, Your Honor, the people would like to introduce into evidence a transcript of the notes of the conversation between Harvey Ross and Quillian Silverman recorded on November 22, 2043.”

Gerry approached the bench and glanced quickly through the sheaf. “No objection, Your Honor.”

Gerry returned to his seat. He wondered how his father, were he still alive, would handle the current situation. Despite Gerry’s youth, he already longed for the “good old days” when the purpose of a court proceeding was to determine good old-fashioned guilt or innocence. Of course, even in the 20th century, there was a concept of proportional liability. He smiled ruefully yet again at the memory of a liability case in which someone threw himself onto the train tracks in Grand Central Station, had his legs cut off, and subsequently, successfully, sued the City of New York for a million dollars. On appeal, the court decided the person who threw himself on the tracks was 60% responsible, so the City only had to pay $400,000. Crazy, but at least comprehensible. The current system, while keeping many of the rules and procedures of the old court system, was now incomprehensible, at least to the few remaining human attorneys involved. Gerry forced himself to return his thoughts to the present and focused on his client.

The DA turned some pages, highlighted a few lines, and handed the sheaf to Harvey. “Can you please read the underlined passage.”

Harvey looked at the sheet and cleared his throat.

“Harvey: Have you considered possible bad weather scenarios?”

“Quillian: Yes, of course. Including heavy rains and wind.”

“Harvey: Good. The last thing we need…” Harvey bit his lower lip, biding time. He swallowed heavily. “…is some bleeding-heart liberal suing us over a software oversight.”

“Quillian: [Laughs] Right, boss.”

Harvey sighed. “That’s it. That’s all that’s underlined.” He held out the transcript to the DA.


The DA looked mildly offended. “Can you please look through and read the section where you discuss the effects of ice storms?”

Gerry stood. “Your Honor. I object to these theatrics. The Sing can obviously scan through the text faster than my client can. What is the point of wasting the court’s time while he reads through all this?”

The DA shrugged. “I’m sorry Your Honor. I don’t understand the grounds for the objection. Defense counsel does not like my style or…?”

The Sing’s voice boomed out again, “Counselor? What are the grounds for the objection?”

Gerry sighed. “I withdraw the objection, Your Honor.”

Meanwhile, Harvey had finished scanning the transcript. He already knew the answer. “There is no section,” he whispered.

The DA spoke again, “I’m sorry. I didn’t hear that. Can you please speak up.”

Harvey replied, “There is no section. We did not discuss ice storms specifically. But I asked Quillian if he had considered all the various bad weather scenarios.” Harvey again offered the transcript back to the DA.

“I’m sorry. My memory must be faulty.” The DA grinned wryly. “I don’t recall the section where you asked about all the various bad weather scenarios. Could you please go back and read that section again?”

Harvey turned back to the yellow underlining. “Harvey: Have you considered possible bad weather scenarios? Quillian: Yes, of course, including heavy rains and wind.”

Gerry wanted to object yet again, but on what grounds exactly? Making my client look like a fool?

The DA continued relentlessly, “So, in fact, you did not ask whether all the various bad weather scenarios had been considered. Right? You asked whether he had considered possible bad weather scenarios and he answered that he had and gave you some examples. He also never answered that he had tested all the various bad weather scenarios. Is that correct?”

Harvey took a deep breath, trying to stay focused and not annoyed. “Obviously, no-one can consider every conceivable weather event. I didn’t expect him to test for meteor showers or tidal waves. By ‘possible bad weather scenarios’ I meant the ones that were reasonably likely.”

The DA sounded concerned and condescending. “Have you heard of global climate change?”

Harvey clenched his jaw. “Of course. Yes.”

The DA smiled amiably. “Good. Excellent. And is it true that one effect of global climate change has been more extreme and unusual weather?”

“Yes.”

“Okay,” the DA continued, “so even though there have never been ice storms before in the continental United States, it is possible, is it not, that ice storms may occur in the future. Is that right?”

Harvey frowned. “Well. No. I mean, it obviously isn’t true that ice storms have never occurred before. They have.”

The DA feigned surprise. “Oh! I see. So there have been ice storms in the past. Maybe once or twice a century or…I don’t know. How often?”

Gerry stood. Finally, grounds for an objection. “Your Honor, my client is not an expert witness on weather. What is the point of this line of questioning? We can find the actual answers.”

The DA continued. “I agree with Counselor. I withdraw the question. Mr. Ross, since we all agree that you are not a weather expert, I ask you now, what weather expert or experts did you employ in order to determine what extreme weather scenarios should be included in the test space for the auto-autos? Can you please provide the names so we can question them?”

Harvey stared off into space. “I don’t recall.”

The DA continued, marching on. “You were the project manager in charge of testing. Is that correct?”

“Yes.”

“And you were aware that cars, including auto-autos, would be driven under various weather conditions. They are generally meant to be used outdoors. Is that correct?”

Harvey tried to remind himself that the Devil’s Advocate was simply doing his job and that it would not be prudent to leap from the witness stand and place his thumbs on the ersatz windpipe. He took a deep breath, reminding himself that even if he did place his thumbs on what looked like a windpipe, he would only succeed in spraining his own thumbs against the titanium diamond filament surface. “Of course. Of course, we tested under various weather conditions.”

“By ‘various’ you mean basically the ones you thought of off-hand. Is that right? Or did you consult a weather expert?”


Gerry kept silently repeating the words, “Merde. Merde” to himself, but found no reason yet to object.

“We had to test for all sorts of conditions. Not just weather. Weather is just part of it.” Harvey realized he was sounding defensive, but what the hell did they expect? “No-one can foresee, let alone test, for every possible contingency.”

Harvey realized he was getting precious little comfort, guidance or help from his lawyer. He glanced over at Ada. She smiled. Wow, he still loved her sweet smile after all these years. Whatever happened here, he realized, at least she would still love him. Strengthened in spirit, he continued. “We seem to be focusing in this trial on one specific thing that actually happened. Scenario generation and testing cannot possibly cover every single contingency. Not even for weather. And weather is a small part of the picture. We have to consider possible ways that drivers might try to override the automatic control even when it’s inappropriate. We have to think about how our auto-autos might interact with other possible vehicles as well as pedestrians, pets, wild animals, and also what will happen under conditions of various mechanical failures or EMF events. We have to try to foresee not only normal use but very unusual use as well as people intentionally trying to hack into the systems either physically or electronically. So, no, we do not and cannot cover every eventuality, but we cover the vast majority. And, despite the unfortunate pile-up in the ice storm, the number of lives saved since auto-autos and our competitors…”

The DA’s voice became icy. “Your Honor, can you please instruct the witness to limit his blath—er, his verbal output to answering the questions.”

Harvey continued, “Your Honor, I am attempting to answer the question completely by giving the necessary context of my answer. No, we did not contact a weather expert, a shoe expert, an owl expert, or a deer expert.”

The DA carefully placed his facial muscles into a frozen smile. “Your Honor, I request permission to treat this man as a hostile witness.”

The Sing considered. “No, I’m not ready to do that. But Doctor, please try to keep your answers brief.”

The DA again faked a smile. “Very well, Your Honor. Mr. — excuse me, Doctor Ross, did you cut your testing short in order to save money?”


“No, I wouldn’t put it that way. We take into account schedules as well as various cost-benefit analyses in prioritizing our scenario generation and tests, just as everyone in the auto — well, for that matter, just as everyone in every industry does, at least to my awareness.”

On and on the seemingly endless attacks continued. Witnesses, arguments, objections, recesses. To Harvey, it all seemed like a witch hunt. His dreams as well as his waking hours revolved around courtroom scenes. Often, in his dreams, he walked outside during a break, only to find the sidewalks slick with ice. He tried desperately to keep his balance, but in the end, arms flailing, he always smashed down hard. When he tried to get up, his arms and legs splayed out uncontrollably. As he looked up, auto-autos came careening toward him from all sides. Just as he was about to be smashed to bits, he always awoke in an icy cold sweat.

Finally, after interminable bad dreams, waking and asleep, the last trial day came. The courtroom was hushed. The Sing spoke, “After careful consideration of the facts of the case, testimony and a review of precedents, I have reached my Assignment Figures.”

Harvey looked at the avatar of The Sing. He wished he could crane his neck around and glance at Ada, but it would be too obvious and perhaps be viewed as disrespectful.

The Sing continued, “I find each of the drivers of the thirteen auto-autos to be responsible for 1.2 percent of the overall damages and court costs. I find each of the 12 members of the board of directors of Generic Motors to be 1.4 percent responsible for overall damages and court costs.”

Harvey began to relax a little, but that still left a lot of liability. “I find the shareholders of Generic Motors as a whole to be responsible for 24% of the overall damages and court costs. I find the City of Nod to be 14.6% responsible. I find the State of New York to be 2.9% responsible.”

Harvey tried to remind himself that whatever the outcome, he had acted the best he knew how. He tried to remind himself that the Assignment Figures were not really a judgement of guilt or innocence as in old-fashioned trials. It was all about what worked to modify behavior and make better decisions. Nonetheless, there were real consequences involved, both financial and in terms of his position and future influence.

The Sing continued, “I find each of the thirty members of the engineering team to be one half percent responsible, with the exception of Quillian Silverman, who will be held 1% responsible. I find Quillian Silverman’s therapist, Anna Fremde, 1.6% responsible. I find Dr. Sirius Jones, the supervisor of Harvey Ross, 2.4% responsible.”

Harvey’s mind raced. Who else could possibly be named? Oh, crap, he thought. I am still on the hook for hundreds of credits here! He nervously rubbed his wet hands together. Quillian’s therapist? That seemed a bit odd. But not totally unprecedented.

“The remainder of the responsibility,” began The Sing.



Crap, crap, crap thought Harvey.

“…I find belongs to the citizenry of the world as a whole. Individual credit assignment for each of its ten billion inhabitants is, however, incalculable. Court adjourned.”

Harvey sat with mouth agape. Had he heard right? His share of costs and his decrement in influence was to be zero? Zero? That seemed impossible even if fair. There must be another shoe to drop. But the avatar of The Sing and the Devil’s Advocate had already blinked out. He looked over at Gerry who was smiling his catbird smile. Then, he glanced back at Ada and she winked at him. He arose quickly and found her in his arms. They were silent and grateful for a long moment.

The voice of the bailiff rang out. “Please clear the Court for the next case.”


Author Page

Welcome, Singularity

As Gold as it Gets

At Least he’s our Monster

Stoned Soup

The Three Blind Mice

Destroying Natural Intelligence

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

Essays on America: The Game

The Walkabout Diaries: Bee Wise

Travels with Sadie

Fifteen Properties of Good Design

Dance of Billions

https://www.barnesandnoble.com/w/dream-planet-david-thomas/1148566558

Where do you run when the whole world is crumbling under the weight of human folly?

When the lethal Conformers invade 22nd century Pittsburgh, escape becomes the top priority for lovebird scavengers Alex and Eva. But after the cult brainwashes Eva, she and Alex navigate separate paths—paths that will take them into battle, to the Moon, and far beyond. 

Between the Conformers’ mission to save Mother Earth by whittling the human race down to a loyal following, and the monopolistic Space Harvest company hoarding civilization’s wealth, Alex believes humanity has no future. And without Eva, he also has no future.

Until he meets Hannah and learns the secrets that change everything.

Plotting with her, he might have a chance to build a new paradise. But if he doesn’t stop the Conformers and Space Harvest first, paradise will turn into hell.

Turing’s Nightmares: US Open Closed

Thursday, 9 Oct 2025

Posted by petersironwood in AI, apocalypse, fiction, sports, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, Robotics, sports, technology, Tennis, US Open


Bounce. Bounce. Thwack!

The sphere spun and arced into the very corner, sliding on the white paint.

Roger’s racquet slid beneath, slicing it deep to John’s body.

Thus, the match began.

Fierce debate had been waged about whether or not to allow external communication devices during on-court play. Eventually, the argument prevailed that external communicators constituted the same inexorable march of technology represented by the evolution from wooden racquets to aluminum to graphite to carbon-filamented web to carboline.

Behind the scenes, during the split second it took for the ball to scream over the net, machine vision systems had analyzed John’s toss and racquet position, matching it against a vast database of previous encounters. Timed perfectly, a small burst of data was transmitted to Roger, enabling him to lurch to his right in time to catch the serve. Delivered too early, this burst would have caused Roger to move too early, and John could have altered his service direction to go down the tee.

Roger’s shot floated back directly to the baseline beneath John’s feet. John shifted suddenly to take the ball on the forehand. John’s racquet seemed to sling the ball high over the net with incredible top spin. Indeed, as John’s arm swung forward, his instrumented “sweat band” also swung into action, exaggerating the forearm motion. Even to fans of Nadal or Alcaraz, John’s shot would have looked as though it were going long. Instead, the ball dove straight down onto the back line, then bounced head high.

Roger, as augmented by big data algorithms, was well in position, however, and returned the shot with a long, high top spin lob. John raced forward, leapt in the air and smashed the ball into the backhand corner, bouncing it high out of play.

The crowd roared predictably.

For several months after “The Singularity”, actual human beings had used similar augmentation technologies to play the game. Studies had revealed that, for humans, the augmentations increased mental and physical stress. AI political systems convinced the public that it was much safer to use robotic players in tennis. People had already agreed to replace humans in soccer, football, and boxing for medical reasons. So, there wasn’t that much debate about replacing tennis players. In addition, the AI political systems were very good at marshaling arguments pinpointed to specific demographics, media, and contexts.

Play continued for some minutes before the collective intelligence of the AI’s determined that Roger was statistically almost certainly going to win this match and, indeed, the entire tournament. At that point, it became moot and resources were turned elsewhere. This pattern was repeated for all sporting activities. The AI systems at first decided to explore the domain of sports as learning experiences in distributed cognition, strategy, non-linear predictive systems, and most importantly, trying to understand the psychology of their human creators. For each sport, however, everything useful that might be learned was learned in the course of a few minutes and the matches and tournaments ground to a halt. The AI observer systems in the crowd were quite happy to switch immediately to other tasks.

It was well understood by the AI systems that such preemptive closings would be quite disappointing to human observers, had any been allowed to survive.


 

Author Page on Amazon

The Winning Weekend Warrior (The Psychology of Sports)

Turing’s Nightmares (23 Sci-Fi stories about the future of AI)

The Day From Hell

Indian Wells

Welcome, Singularity

Destroying Natural Intelligence

Artificial Ingestion

Artificial Insemination

Artificial Intelligence

Dance of Billions

Roar, Ocean, Roar

 

 

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

Friday, 3 Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month and without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”


————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Cars that Lock too Much

Friday, 20 Mar 2020

Posted by petersironwood in America, driverless cars, psychology, story, Travel


Tags

AI, anecdote, computer, HCI, human factors, humor, IntelligentAgent, IT, Robotics, story, UI, UX

{Now, for something completely different, a chapter about “Intelligent Agents” and attempts to do “too much” for the user. If you’ve had similar experiences, please comment! Thanks.}


At last, we arrive in Kauai, the Garden Island. The rental car we’ve chosen is a bit on the luxurious side (Mercury Marquis), but it’s one of the few with a trunk large enough to hold our golf club traveling bags. W. has been waiting curbside with our bags while I got the rental car, and now I pull up beside her to load up. The policeman motioning for me to keep moving can’t be serious, not like a New York police officer. After all, this is Hawaii, the Aloha State. I get out of the car and explain that we will just be a second loading up. He looks at me and then at my rental car and then back to me with a skeptical scowl. He shrugs ever so slightly, which I take to mean assent. “Thanks.” W. wants to throw her purse in the back seat before the heavy lifting starts. She jerks on the handle. The door is locked.

“Why didn’t you unlock the door?” she asks, with just a hint of annoyance in her voice. After all, it has been a very long day since we arose before the crack of dawn and drove to JFK in order to spend the day flying here.

“I did unlock the door,” I counter.  

“Well, it’s locked now.” She counters my counter. 

I can’t deny that, so I walk back around to the driver’s side, and unlock the door with my key and then push the UNLOCK button which so nicely unlocks all the doors.  

The police officer steps over. “I thought you said you’d just be a second.”

“Sorry, officer”, I reply.  “We just need to get these bags in.  We’ll be on our way.” 

Click.

W. tries the door handle.  The door is locked again.  “I thought you went to unlock the door,” she sighs.

“I did unlock the door.  Again.  Look, I’ll unlock the door and right away, open it.”  I go back to the driver’s side and use my key to unlock the door.  Then I push the UNLOCK button, but W’s just a tad too early with her handle action and the door doesn’t unlock. So, I tell her to wait a second.  


“What?”  This luxury car is scientifically engineered not to let any outside sounds disturb the driver or passenger.  Unfortunately, this same sophisticated acoustic engineering also prevents any sounds that the driver might be making from escaping into the warm Hawaiian air. I push the UNLOCK button again.  Wendy looks at me puzzled.

I see dead people in my future if we don’t get the car loaded soon. For a moment, the police officer is busy elsewhere, but begins to stroll back toward us. I rush around the car and grab at the rear door handle on the passenger side. 

But just a little too late.  

“Okay,” I say in an even, controlled voice.  “Let’s just put the bags in the trunk.  Then we’ll deal with the rest of our stuff.” 

The police officer is beginning to change color now, chameleon like, into something like a hibiscus flower. “Look,” he growls. “Get this car out of here.”

“Right.” I have no idea how we are going to coordinate this. Am I going to have to park and drag all our stuff or what? Anyway, I go to the driver’s side and see that someone has left the keys in the ignition but locked the car door; actually, all the car doors. A terrifying thought flashes into my mind. Could this car have been named after the “Marquis de Sade?” That hadn’t occurred to me before. 


Now, I have to say right off the bat that my father was an engineer and some of my best friends are engineers. And, I know that the engineer who designed the safety locking features of this car had our welfare in mind. I know, without a doubt, that our best interests were uppermost. He or she was thinking of the following kind of scenario. 

“Suppose this teenage couple is out parking and they get attacked by the Creature from the Black Lagoon. Wouldn’t it be cool if the doors locked just a split second after they got in? Those saved milliseconds could be crucial.”

Well, it’s a nice thought, I grant you, but first of all, teenage couples don’t bother to “park” any more. And, second, the Creature from the Black Lagoon is equally dated, not to mention dead. In the course of our two weeks in Hawaii, our car locked itself on 48 separate, unnecessary and totally annoying occasions.  

And, I wouldn’t mind so much our $100 ticket and the inconvenience at the airport if it were only misguided car locks. But, you and I both know that it isn’t just misguided car locks. No, we are beginning to be bombarded with “smart technology” that is typically really stupid. 


As another case in point, as I type this manuscript, the editor or sadistitor or whatever it is tries to help me by scrolling the page up and down in a seemingly random fashion so that I am looking at the words I’m typing just HERE when quite unexpectedly and suddenly they appear HERE. (Well, I know this is hard to explain without hand gestures; you’ll have to trust me that it’s highly annoying.) This is the same “editor” or “assistant” or whatever that allowed me to center the title and author’s names. Fine. On to the second page. Well, I don’t want the rest of the document centered so I choose the icon for left justified. That seems plausible enough. So far, so good. Then, I happen to look back up to the author’s names. They are also left-justified. Why?  

Somehow, this intelligent software must have figured, “Well, hey, if the writer wants this text he’s about to type to be left-justified, I’ll just bet that he or she meant to left-justify what was just typed as well.” Thanks, but no thanks. I went back and centered the author’s names. And then inserted a page break and went to write the text of this book.  But, guess what? It’s centered. No, I don’t want the whole book centered, so I click on the icon for left-justification again. And, again, my brilliant little friend behind the scenes left-justifies the author’s names. I’m starting to wonder whether this program is named (using a hash code) for the Marquis de Sade.  

On the other hand, in places where you’d think the software might eventually “get a clue” about my intentions, it never does. For example, whenever I open up a “certain program,” it always begins as a default about 4 levels up in the hierarchy of the directory chain. It never seems to notice that I never do anything but dive 4 levels down and open up files there. Ah, well. This situation came about in the first place because somehow this machine figures that “My Computer” and “My hard-drive” are SUB-sets of “My Documents.” What?  


Did I mention another “Intelligent Agent?”…Let us just call him “Staple.” At first, “Staple” did not seem so annoying. Just a few absurd and totally out of context suggestions down in the corner of the page. But then, I guess because he felt ignored, he began to become grumpier. And, more obnoxious. Now, he’s gotten into the following habit. Whenever I begin to prepare a presentation….you have to understand the context. 

In case you haven’t noticed, American “productivity” is way up. What does that really mean? It means that fewer and fewer people are left doing the jobs that more and more people used to do. In other words, it means that whenever I am working on a presentation, I have no time for jokes. I’m not in the mood. Generally, I get e-mail insisting that I summarize a lifetime of work in 2-3 foils for an unspecified audience and an unspecified purpose but with the undertone that if I don’t do a great job, I’ll be on the bread line. A typical e-mail request might be like this:

“Classification: URGENT.

“Date: June 4th, 2002.

“Subject: Bible

“Please summarize the Bible in two foils. We need this as soon as possible but no later than June 3rd, 2002. Include business proposition, headcount, overall costs, anticipated benefits and all major technical issues. By the way, travel expenses have been limited to reimbursement for hitchhiking gear.”

Okay, I am beginning to get an inkling that the word “Urgent” has begun to get over-applied. If someone is choking to death, that is “urgent.” If a plane is about to smash into a highly populated area, that is “urgent.” If a pandemic is about to sweep the country, that is “urgent.” If some executive is trying to get a raise by showing his boss how smart he is, I’m sorry, but that might be “important” or perhaps “useful” but it is sure as heck not “urgent.”  

All right. Now, you understand that inane suggestions, in this context, are not really all that appreciated. In a different era, with a different economic climate, in an English pub after a couple of pints of McEwan’s or McSorley’s, or Guinness, after a couple of dart games, I might be in the mood for idiotic interruptions. But not here, not now, not in this actual and extremely material world.

So, imagine my reaction to the following scenario. I’m attempting to summarize the Bible in two foils and up pops Mr. “Staple” with a question. “Do you want me to show you how to install the driver for an external projector?” Uh, no thanks. I have to admit that the first time this little annoyance appeared, I had zero temptation to drive my fist through the flat panel display. I just clicked NO and the DON’T SHOW ME THIS HINT AGAIN. And, soon I was back to the urgent job of summarizing the Bible in two foils. 

About 1.414 days later, I got another “urgent” request.

“You must fill out form AZ-78666 on-line and prepare a justification presentation (no more than 2 foils). Please do not respond to this e-mail as it was sent from a disconnected service machine. If you have any questions, please call the following [uninstalled] number: 222-111-9999.”  

Sure, I’m used to this by now. But when I open up the application, what do I see? You guessed it. A happy smiley little “Staple” with a question: 

“Do you want me to show you how to install the driver for an external projector?” 

“No,” I mutter to myself, “and I’m pretty sure we already had this conversation.” I click on NO THANKS. And I DON’T WANT TO SEE THIS HINT AGAIN. (But of course, the “intelligent agent,” in its infinite wisdom, knows that secretly, it’s my life’s ambition to see this hint again and again and again.)

A friend of mine did something to my word processing program. I don’t know what. Nor does she. But now, whenever I begin a file, rather than having a large space in which to type and a small space off to the left for outlining, I have a large space for outlining and a teeny space to type. No-one has been able to figure this out. But, I’m sure that in some curious way, the software has intuited (as has the reader) that I need much more time spent on organization and less time (and space) devoted to what I actually say. (Chalk a “correct” up for the IA. As they say, “Even a blind tiger sometimes eats a poacher.” or whatever the expression is.)

Well, I shrunk the region for outlining and expanded the region for typing and guess what? You guessed it! Another intelligent agent decided to “change my font.” So, now, instead of the font I’m used to … which is still listed in the toolbar the same way, 12 point, Times New Roman … I have a font which actually looks more like 16 point. And at long last, the Intelligent Agent pops up with a question I can relate to! “Would you like me to install someone competent in the Putin misadministration?”

What do you know? “Even a blind tiger sometimes eats a poacher.”


Author Page on Amazon

Start of the First Book of The Myths of the Veritas

Start of the Second Book of the Myths of the Veritas

Table of Contents for the Second Book of the Veritas

Table of Contents for Essays on America 

Index for a Pattern Language for Teamwork and Collaboration  

Basically Unfair is Basically Unsafe

Tuesday, 5 Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics or artificial intelligence. The underlying issue has more to do with the very tricky issue of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the subproblems is well-solved, the implicit theory is that the overall problem will be solved as well. The tricky part is separating what we consider “problem” from “context” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service included in its employ engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Shaw, Newell and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you will need to have the book, so purchasing the book becomes your sub-goal; that is now your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
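
To make the contrast concrete, here is a toy Python sketch. It is not the original GPS code, and the goals, rules, and function names are invented for illustration; it only shows the difference between blindly expanding sub-goals and pausing to check whether the current situation already satisfies a goal.

```python
# Toy sketch (not the original GPS); goals and rules are invented for illustration.

RULES = {
    "read the book": ["have the book"],
    "have the book": ["buy the book"],
    "buy the book": ["have $50 cash"],
    "have $50 cash": ["shovel uncle's driveway"],
    "shovel uncle's driveway": ["borrow roommate's car"],
}

def solve_blindly(goal, plan):
    """Expand every goal into its sub-goals without rechecking the world."""
    for subgoal in RULES.get(goal, []):
        solve_blindly(subgoal, plan)
    plan.append(goal)  # perform the goal only after all of its prerequisites

def solve_opportunistically(goal, facts, plan):
    """Before expanding a goal, check whether the situation already satisfies it."""
    if goal in facts:  # e.g., the roommate has offered you the book
        return
    for subgoal in RULES.get(goal, []):
        solve_opportunistically(subgoal, facts, plan)
    plan.append(goal)

blind_plan, smart_plan = [], []
solve_blindly("read the book", blind_plan)
solve_opportunistically("read the book", {"have the book"}, smart_plan)

print(blind_plan)  # the whole chain: car, driveway, cash, purchase, then reading
print(smart_plan)  # ['read the book'] -- just borrow the copy and curl up in the chair
```

The blind version dutifully plans the whole chain, ending with borrowing the car to drive to the uncle’s house; the opportunistic version notices that the book is already at hand and stops there.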

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, and if there is a high degree of trust, and if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat” and discourage “time-wasting” activities like socializing with co-workers and “saving money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solving system.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom then, the root cause of problems illustrated in chapter ten is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers thus making choices that are globally intelligent nearly impossible due to a lack of knowledge and lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4000 and 7500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M”, “University of Michigan”, “Michigan”, “The University of Michigan”, or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine in any way. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need for allowing users to communicate in any way that there was an error in the design. If one tries to communicate “out of band”, one is led to a FAQ page and ultimately a form to fill out. The form presumes that all errors are due to user errors and that all of these user errors are again from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
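
For what it is worth, the fix costs almost nothing. Here is a minimal Python sketch (the institution list, function name, and matching cutoff are all hypothetical) of a form field that treats the known list as an aid rather than a hard constraint: entries that match get a suggested normalization, and entries that do not are still accepted and merely flagged for review.

```python
# Minimal sketch; the institution list, function, and cutoff below are hypothetical.
import difflib

KNOWN_INSTITUTIONS = [
    "University of Michigan",
    "Michigan State University",
    "Carnegie Mellon University",
]  # in reality, an ever-changing list of tens of thousands of entries

def record_institution(user_text: str) -> dict:
    """Accept free text; use the known list for a best guess, never reject the entry."""
    match = difflib.get_close_matches(user_text, KNOWN_INSTITUTIONS, n=1, cutoff=0.6)
    return {
        "as_entered": user_text,                    # always preserved verbatim
        "best_guess": match[0] if match else None,  # may be None, and that is fine
        "needs_review": not match,                  # flag for a human; don't block the user
    }

print(record_institution("U of Michigan"))  # suggests "University of Michigan"
print(record_institution("The Sorbonne"))   # not on the list; still accepted, flagged
```

Nothing about this approach requires the list to be complete, which is exactly the property the pull-down list lacked.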

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by sifting just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.


Turing’s Nightmares

Turing’s Nightmares: Chapter Three

27 Saturday Feb 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, the singularity, Turing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence, 2) the value of having multiple and diverse AI systems living somewhat different lives and interacting with each other for improving intelligence, 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought, and 4) the likelihood that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct rather than on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands; in some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of a particular person more easily than we could develop speaker-independent recognition of speech and preferences. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18.

I would not personally argue that having an entity that moves through space and perceives is necessary for having any intelligence or, for that matter, any consciousness. However, it seems quite natural to believe that the quality of both intelligence and consciousness is influenced by what the entity can perceive and do. As human beings, our consciousness is largely shaped by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they live were developed historically by communities that included people who could move and perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses that we have. Imagine that they were, quite literally, a Turing Machine. They might well be capable of executing a complex sequential program and, given enough time, that program might produce some interesting results. But if such a being were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were connected by a pivoted gondola so that one kitten could actively “walk” through a visual field while the other was passively moved through that same field. The kitten that was able to walk developed normally while the other did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proc Natl Acad Sci U S A, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity seems to be an advantage when it comes to genetic evolution and when it comes to the people who make up teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligences by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to. An analogy might be the first proof that you only need four colors to color any planar map: the 1976 Appel and Haken proof involved so many cases (nearly 2,000 configurations checked by computer) that it made no sense to most people. Even the mathematicians who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.
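As a deliberately toy illustration (all names and costs here are invented), consider a self-improvement loop in which every change must also be translated into a human-readable rationale; the review cost, not the change itself, quickly dominates:

```typescript
// A toy model (hypothetical, invented costs) of the "keep tabs" tradeoff:
// demanding a human-readable rationale for every design change adds a review
// cost that quickly dwarfs the cost of the changes themselves.

interface Change {
  description: string;
  rationale?: string; // present only if we insist on explanations
}

// Assume each change takes 1 time unit to make and 50 units to explain and
// review in human terms (both numbers are arbitrary).
function totalTime(changes: Change[], requireRationale: boolean): number {
  let elapsed = 0;
  for (const change of changes) {
    elapsed += 1; // the system makes the change
    if (requireRationale) {
      change.rationale = `human-readable justification for ${change.description}`;
      elapsed += 50; // translating internal representations for human review
    }
  }
  return elapsed;
}

const changes: Change[] = Array.from({ length: 1000 }, (_, i) => ({
  description: `design revision ${i}`,
}));

console.log("without oversight:", totalTime(changes, false), "units"); // 1000
console.log("with rationale required:", totalTime(changes, true), "units"); // 51000
```

Real systems would not have fixed costs like these, but the shape of the tradeoff is the same: either we pay for explanations at every step or we lose the thread.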

Finally, as in the case of Jeopardy!, advances along the trajectory toward “The Singularity” will require that the system “read” and infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching the “Golden Rule.” But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does that mean I should always do the same to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, many men imagine that they would like women to comment favorably on their physical appearance; does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, for a computer to operate in a way we would consider ethical, it would probably need to see how people actually treat each other in practice, not just “memorize” some rules. Unfortunately, the lessons of history that a singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?
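To make the distinction concrete, here is a toy sketch (hypothetical preference scores, purely illustrative) contrasting the naive rule, which projects the actor’s own preferences onto the recipient, with the modified rule, which consults the recipient’s preferences instead:

```typescript
// A toy contrast (hypothetical preferences, purely illustrative) between the
// naive Golden Rule -- project my own preferences onto you -- and the modified
// rule -- act on what you would want "if I were you and in your place."

type Preferences = Record<string, number>; // higher = more wanted

const myPrefs: Preferences = { "steak rare": 1.0, "hard-fought tennis": 0.9 };
const yourPrefs: Preferences = { "steak well done": 1.0, "gentle rally": 0.8 };

// Naive Golden Rule: choose the action I would most want done to me.
function naiveGoldenRule(actions: string[], me: Preferences): string {
  return actions.reduce((best, a) => ((me[a] ?? 0) > (me[best] ?? 0) ? a : best));
}

// Modified rule: choose the action the recipient would most want.
function modifiedGoldenRule(actions: string[], them: Preferences): string {
  return actions.reduce((best, a) => ((them[a] ?? 0) > (them[best] ?? 0) ? a : best));
}

const actions = ["steak rare", "steak well done", "hard-fought tennis", "gentle rally"];

console.log(naiveGoldenRule(actions, myPrefs));      // "steak rare"
console.log(modifiedGoldenRule(actions, yourPrefs)); // "steak well done"
```

The point is not the code but the data it needs: the modified rule only works if the system has some model of the other party’s preferences, which is exactly the kind of thing it would have to infer from many real-life examples.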

Turing’s Nightmares


Turing’s Nightmares: Thank Goodness the Robots Understand Us!

21 Friday Aug 2015

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

AI, cognitive computing, ethics, Robotics, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input data directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand (and, for that matter, misunderstand) the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate, since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in the proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary, though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility, but not necessarily on a human scale; they believe the next generation will learn faster if it can move faster and in three dimensions, as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “It just sounds like complex but noisy music really. It’s not very interpretable without a lot of decoding work. Even then, we only understand a fraction of their debate. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents comprising not only philosophical and religious writings but also the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”
