
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: ethics

Turing’s Nightmares: Chapter Three

Saturday, 27 February 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, ethics, Robotics, the singularity, Turing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!”, at least four major issues are touched on: 1) the value of autonomous robotic entities for improved intelligence; 2) the value of having multiple and diverse AI systems, living somewhat different lives and interacting with each other, for improving intelligence; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely, at least to some extent, on inference from many real-life examples to induce principles of conduct rather than having everything specifically programmed. Let us examine these one by one.

There are many practical reasons autonomous robots can be useful. For some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands; in some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of one person more easily than we could develop speaker-independent speech recognition and generic preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18.
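The latency point is easy to make concrete. Here is a back-of-envelope sketch (the speeds and delays are assumed, illustrative numbers, not figures from the book): a robot that must wait on a round trip to a central computer travels a long way “blind” compared with one that closes its control loop onboard.

```python
# Back-of-envelope sketch with assumed, illustrative numbers:
# how far does a robot travel before it can react to new sensor input,
# under onboard control vs. "phone home" control?

SPEED_M_PER_S = 1.5          # assumed walking-pace robot
LOCAL_LOOP_S = 0.010         # assumed 100 Hz onboard control loop
CLOUD_ROUND_TRIP_S = 0.200   # assumed network round trip plus server time

def blind_distance(speed_m_per_s: float, reaction_s: float) -> float:
    """Distance covered before the controller can respond to new input."""
    return speed_m_per_s * reaction_s

print(f"onboard control: {blind_distance(SPEED_M_PER_S, LOCAL_LOOP_S) * 100:.1f} cm blind")
print(f"remote control:  {blind_distance(SPEED_M_PER_S, CLOUD_ROUND_TRIP_S) * 100:.1f} cm blind")
# onboard control: 1.5 cm blind
# remote control:  30.0 cm blind
```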

I would not personally argue that having an entity that moves through space and perceives is necessary for intelligence, or for that matter, for consciousness. However, it seems quite natural to believe that the quality of intelligence and consciousness is influenced by what the entity can perceive and do. As human beings, our consciousness is largely shaped by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were developed historically by a community that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space and did not share any of the specific senses that we have. Imagine that they were, quite literally, a Turing machine. They might well be capable of executing a complex sequential program, and, given enough time, that program might produce some interesting results. But if such a being were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked together on a pivoted carousel: one kitten was able to walk through a visual field while the other was carried passively in a gondola through that same field. The kitten that was able to walk developed normally while the other one did not. Similarly, simply watching TV passively does little to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity seems to be an advantage when it comes to genetic evolution and when it comes to composing human teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote at ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligences by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to. An analogy is the first proof that four colors suffice to color any planar map. The proof involved so many cases (nearly 2000) that it made no sense to most people. Even the mathematicians who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). And it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different from and better (at least for them) than any we have developed. This too will tend to make it impossible for people to track what they are doing in anything like real time.
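The asymmetry is worth making concrete: checking a proposed four-coloring is mechanical and nearly instant, while producing the guarantee that one always exists took a computer grinding through the cases. Below is a minimal sketch of the checking side (a hypothetical example, not code from the book); the same gap between generating and verifying is what makes “keeping tabs” hopeless in real time.

```python
# Minimal sketch: verifying a proposed coloring of a map is easy and fast,
# even though *proving* that four colors always suffice required a computer.

def is_valid_coloring(adjacency: dict, coloring: dict, max_colors: int = 4) -> bool:
    """True if at most max_colors are used and no two neighboring
    regions share a color."""
    if len(set(coloring.values())) > max_colors:
        return False
    return all(coloring[region] != coloring[neighbor]
               for region, neighbors in adjacency.items()
               for neighbor in neighbors)

# Toy "map": four mutually adjacent regions, which forces four colors.
toy_map = {"A": ["B", "C", "D"], "B": ["A", "C", "D"],
           "C": ["A", "B", "D"], "D": ["A", "B", "C"]}

print(is_valid_coloring(toy_map, {"A": 1, "B": 2, "C": 3, "D": 4}))  # True
print(is_valid_coloring(toy_map, {"A": 1, "B": 1, "C": 3, "D": 4}))  # False
```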

Finally, as in the case of Jeopardy!, advances along the trajectory toward “The Singularity” will require the system to “read” and to infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching the “Golden Rule.” But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to: “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their physical appearance. Does that make it right for men to make such comments to women? If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, for a computer to operate in a way we would consider ethical, it would probably need to see how people actually treat each other in practice, not just “memorize” some rules. Unfortunately, the lessons of history that a singularity-bound computer would infer might not be very “ethical” after all. We humans have a long history of destroying entire species when it is convenient, or sometimes just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?
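The difference between the two rules can be stated almost mechanically. Here is a toy sketch (the names and preference data are invented for illustration): the literal Golden Rule consults the actor’s own preferences, while the modified rule consults the recipient’s.

```python
# Toy illustration (invented example data): the literal Golden Rule projects
# the actor's preferences onto the recipient; the modified rule looks up the
# recipient's own preferences instead.

PREFERENCES = {
    "me":       {"tennis": "play hard", "steak": "rare"},
    "neighbor": {"tennis": "rally gently", "steak": "well done"},
}

def golden_rule(actor: str, recipient: str, situation: str) -> str:
    """Do unto others as *you* would have them do unto you."""
    # note: recipient is ignored -- that is precisely the rule's flaw
    return PREFERENCES[actor][situation]

def modified_golden_rule(actor: str, recipient: str, situation: str) -> str:
    """Do unto others as you would want done if you were *them*."""
    return PREFERENCES[recipient][situation]

print(golden_rule("me", "neighbor", "steak"))           # rare (wrong for them)
print(modified_golden_rule("me", "neighbor", "steak"))  # well done
```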

Turing’s Nightmares


It’s not Your Fault; It’s not Your Fault

Tuesday, 12 January 2016

Posted by petersironwood in driverless cars, The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, Design, ethics, law, the singularity, Turing


“Objection, your honor! Hearsay!” Gerry’s voice held just the practiced and proper combination of righteous outrage and reasoned eloquence.

“Objection noted but overruled.” The Sing’s voice rang out with even more practiced tones. It sounded at once warmly human yet immensely powerful.

“But Your Honor…” began Gerry.

“Objection noted and overruled,” The Sing repeated with the slightest traces of feigned impatience, annoyance, and the threat of a contempt citation.

Gerry sat. He drew in a deep, calming breath and felt comforted by the rich smell of the panelled chambers. He began calculating his next move.

The Sing continued in a voice of humble reasonableness with undertones of boredom. “The witness will answer the question.”

Harvey wriggled uncomfortably, trying to think clearly despite his nervousness. “I don’t exactly recall what he said in answer to my question, but surely…” Harvey paused and glanced nervously at Gerry, looking for a clue, but Gerry was paging through his notecards. “Surely, there are recordings that would be more accurate than my recollection.”

The DA turned to The Sing avatar and held up a sheaf of paper. “Indeed, Your Honor, the people would like to introduce into evidence a transcript of the conversation between Harvey Ross and Quillian Silverman recorded on November 22, 2043.”

Gerry approached the bench and glanced quickly through the sheaf. “No objection, Your Honor.”

Gerry returned to his seat. He wondered how his father, were he still alive, would handle the current situation. Despite Gerry’s youth, he already longed for the “good old days” when the purpose of a court proceeding was to determine good old-fashioned guilt or innocence. Of course, even in the 20th century there was a concept of proportional liability. He smiled ruefully yet again at the memory of a liability case in which someone threw himself onto the train tracks in Grand Central Station, had his legs cut off, and subsequently, successfully, sued the City of New York for a million dollars. On appeal, the court decided the person who threw himself on the tracks was 60% responsible, so the City only had to pay $400,000. Crazy, but at least comprehensible. The current system, while keeping many of the rules and procedures of the old courts, was now incomprehensible, at least to the few remaining human attorneys involved. Gerry forced himself to return his thoughts to the present and focused on his client.

The DA turned some pages, highlighted a few lines, and handed the sheaf to Harvey. “Can you please read the underlined passage?”

Harvey looked at the sheet and cleared his throat.

“Harvey: Have you considered possible bad weather scenarios?”

“Quillian: Yes, of course. Including heavy rains and wind.”

“Harvey: Good. The last thing we need…” Harvey bit his lower lip, biding time. He swallowed heavily. “…is some bleeding heart liberal suing us over a software oversight.”

“Quillian: (laughs). Right, boss.”

“That’s it. That’s all that’s underlined.” He held out the transcript to the DA.

The DA looked mildly offended. “Can you please look through and read the section where you discuss the effects of ice storms?”

Gerry stood. “Your Honor. I object to these theatrics. The Sing can obviously scan through the text faster than my client can. What is the point of wasting the court’s time while he reads through all this?”

The DA shrugged. “I’m sorry Your Honor. I don’t understand the grounds for the objection. Defense counsel does not like my style or…?”

The Sing’s voice boomed out again, “Counselor? What are the grounds for the objection?”

Gerry sighed. “I withdraw the objection, Your Honor.”

Meanwhile, Harvey had finished scanning the transcript. He already knew the answer. “There is no section,” he whispered.

The DA spoke again, “I’m sorry. I didn’t hear that. Can you please speak up?”

Harvey replied, “There is no section. We did not discuss ice storms specifically. But I asked Quillian if he had considered all the various bad weather scenarios.” Harvey again offered the transcript back to the DA.

“I’m sorry. My memory must be faulty.” The DA grinned wryly. “I don’t recall the section where you asked about all the various bad weather scenarios. Could you please go back and read that section again?”

Harvey turned back to the yellow underlining. “Harvey: Have you considered possible bad weather scenarios?” “Quillian: Yes, of course, including heavy rains and wind.”

Gerry wanted to object yet again, but on what grounds exactly? Making my client look like a fool?

The DA continued relentlessly, “So, in fact, you did not ask whether all the various bad weather scenarios had been considered. Right? You asked whether he had considered possible bad weather scenarios, and he answered that he had and gave you some examples. Nor did he ever say that he had tested all the various bad weather scenarios. Is that correct?”

Harvey took a deep breath, trying to stay focused and not annoyed. “Obviously, no one can consider every conceivable weather scenario. I didn’t expect him to test for meteor showers or tidal waves. By ‘possible bad weather scenarios’ I meant the ones that were reasonably likely.”

The DA sounded concerned and condescending. “Have you heard of global climate change?”

Harvey clenched his jaw. “Of course. Yes.”

The DA smiled amiably. “Good. Excellent. And is it true that one effect of global climate change has been more extreme and unusual weather?”

“Yes.”

“Okay,” the DA continued, “so even though there have never been ice storms before in the continental United States, it is possible, is it not, that ice storms may occur in the future. Is that right?”

Harvey frowned. “Well. No. I mean, it obviously isn’t true that ice storms have never occurred before. They have.”

The DA feigned surprise. “Oh! I see. So there have been ice storms in the past. Maybe once or twice a century or…I don’t know. How often?”

Gerry stood. Finally, a point worth objecting to. “Your Honor, my client is not an expert witness on weather. What is the point of this line of questioning? We can find the actual answers.”

The DA continued, “I agree with counsel. I withdraw the question. Mr. Ross, since we all agree that you are not a weather expert, I ask you now: what weather expert or experts did you employ in order to determine which extreme weather scenarios should be included in the test space for the auto-autos? Can you please provide the names so we can question them?”

Harvey stared off into space. “I don’t recall.”

The DA marched on. “You were the project manager in charge of testing. Is that correct?”

“Yes.”

“And you were aware that cars, including auto-autos, would be driven under various weather conditions. They are meant to be used outdoors. Is that correct?”

Harvey tried to remind himself that the Devil’s Advocate was simply doing his job and that it would not be prudent to leap from the witness stand and place his thumbs on the ersatz windpipe. He took a deep breath, reminding himself that even if he did place his thumbs on what looked like a windpipe, he would only succeed in spraining them against the titanium-diamond filament surface. “Of course. Of course, we tested under various weather conditions.”

“By ‘various’ you mean basically the ones you thought of off-hand. Is that right? Or did you consult a weather expert?”

Gerry kept silently repeating the words, “Merde. Merde” to himself, but found no reason yet to object.

“We had to test for all sorts of conditions. Not just weather. Weather is just part of it.” Harvey realized he was sounding defensive, but what the hell did they expect? “No one can foresee, let alone test for, every possible contingency.”

Harvey realized he was getting precious little comfort, guidance, or help from his lawyer. He glanced over at Ada. She smiled. Wow, he still loved her sweet smile after all these years. Whatever happened here, he realized, at least she would still love him. Strengthened in spirit, he continued. “We seem to be focusing in this trial on one specific thing that actually happened. Scenario generation and testing cannot possibly cover every single contingency. Not even for weather. And weather is a small part of the picture. We have to consider possible ways that drivers might try to override the automatic controls even when it’s inappropriate. We have to think about how our auto-autos might interact with other possible vehicles as well as pedestrians, pets, and wild animals, and about what will happen under various mechanical failures or EMF events. We have to try to foresee not only normal use but very unusual use, as well as people intentionally trying to hack into the systems either physically or electronically. So, no, we do not and cannot cover every eventuality, but we cover the vast majority. And, despite the unfortunate pile-up in the ice storm, the number of lives saved since auto-autos and our competitors…”

The DA’s voice became icy. “Your Honor, can you please instruct the witness to limit his blather… his verbal output to answering the questions.”

Harvey continued, “Your Honor, I am attempting to answer the question completely by giving the necessary context. No, we did not contact a weather expert, a shoe expert, an owl expert, or a deer expert.”

The DA carefully placed his facial muscles into a frozen smile. “Your Honor, I request permission to treat this man as a hostile witness.”

The Sing considered. “No, I’m not ready to do that. But Doctor, please try to keep your answers brief.”

The DA again faked a smile. “Very well, Your Honor. Mr. — excuse me, Doctor Ross, did you cut your testing short in order to save money?”

“No, I wouldn’t put it that way. We take into account schedules as well as various cost-benefit analyses in prioritizing our scenario generation and tests, just as everyone in the auto… well, for that matter, just as everyone in every industry does, at least to my awareness.”

On and on the seemingly endless attacks continued. Witnesses, arguments, objections, recesses. To Harvey, it all seemed like a witch hunt. His dreams as well as his waking hours revolved around courtroom scenes. Often, in his dreams, he walked outside during a break, only to find the sidewalks slick with ice. He tried desperately to keep his balance, but in the end, arms flailing, he always smashed down hard. When he tried to get up, his arms and legs splayed out uncontrollably. As he looked up, auto-autos came careening toward him from all sides. Just as he was about to be smashed to bits, he always awoke in an icy cold sweat.

Finally, after interminable bad dreams, waking and asleep, the last trial day came. The courtroom was hushed. The Sing spoke: “After careful consideration of the facts of the case, the testimony, and a review of precedents, I have reached my Assignment Figures.”

Harvey looked at the avatar of The Sing. He wished he could crane his neck around and glance at Ada, but it would be too obvious and perhaps be viewed as disrespectful.

The Sing continued, “I find the drivers of each of the thirteen auto-autos to be responsible for 1.2 percent of the overall damages and court costs. I find each of the 12 members of the board of directors of Generic Motors to be 1.4 percent responsible for overall damages and court costs.”

Harvey began to relax a little, but that still left a lot of liability. The Sing went on, “I find the shareholders of Generic Motors as a whole to be responsible for 24% of the overall damages and court costs. I find the City of Nod to be 14.6% responsible. I find the State of New York to be 2.9% responsible.”

Harvey tried to remind himself that whatever the outcome, he had acted the best he knew how. He tried to remind himself that the Assignment Figures were not really a judgment of guilt or innocence as in old-fashioned trials. It was all about what worked to modify behavior and produce better decisions. Nonetheless, there were real consequences involved, both financial and in terms of his position and future influence.

The Sing continued, “I find each of the thirty members of the engineering team to be one half percent responsible, with the exception of Quillian Silverman, who will be held 1 percent responsible. I find Quillian Silverman’s therapist, Anna Fremde, 1.6 percent responsible. I find Dr. Sirius Jones, the supervisor of Harvey Ross, 2.4 percent responsible.”

Harvey’s mind raced. Who else could possibly be named? Oh, crap, he thought. I am still on the hook for hundreds of credits here! He nervously rubbed his wet hands together. Quillian’s therapist? That seemed a bit odd. But not totally unprecedented.

“The remainder of the responsibility,” began The Sing.

Crap, crap, crap thought Harvey.

“I find belongs to the citizenry of the world as a whole. Individual credit assignment for each of its ten billion inhabitants is, however, incalculable. Court adjourned.”

Harvey sat with mouth agape. Had he heard right? His share of the costs and his decrement in influence were to be zero? Zero? That seemed impossible, even if fair. There must be another shoe to drop. But the avatar of The Sing and the Devil’s Advocate had already blinked out. He looked over at Gerry, who was smiling his catbird smile. Then he glanced back at Ada, and she winked at him. He arose quickly and found her in his arms. They were silent and grateful for a long moment.

The voice of the Bailiff rang out. “Please clear the Court for the next case.”

Turing’s Nightmares: Sweet Seventeen

Tuesday, 10 November 2015

Posted by petersironwood in Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, cybersex, ethics

“Where are you off to, sweetheart?”

“Sorry. I just remembered an email I have to respond to by — well, it’s Tokyo, you know.”

“All right, but it’s after midnight here in our time zone. Can’t it wait?”

“Well, not really. I will just lie here thinking about it anyway until I go do something about it. Just a few minutes, Patrick. Go to sleep.”

Rachel slid into her slippers and threw on her robe. The hardwood floors between their bedroom and her home office felt cold and damp in Delaware’s December, even through the leather.

Rachel plunked down at her computer, fired up the 3-D visualizer and frictionated her hands together vigorously.

Meanwhile, Patrick stared at the ceiling, faintly lit by the lonely glow of the entertainment center’s vampire-power indicator lights. Rachel’s job helped provide them a great lifestyle, but it demanded a lot, too. This was the fourth time this week she had gotten out of bed late to go work on the computer. His job as a lawyer was demanding as well, but he had long ago decided his health came first. He would bring her some hot tea. Maybe he could surprise her. He’d just sneak the tea out one second before the microwave beeped.

Two minutes later, Patrick padded silently into Rachel’s office. He stared for a minute, uncomprehending. The tea, the teacup and his plans to silently surprise her clattered noisily onto the oak floor where entropy had its inexorable way with all three.

Patrick’s lips moved but no words escaped for a long moment. Rachel jumped, banging both thighs painfully into her desk. “What!?” She spun around and looked at Patrick accusingly. “What are you doing here?!” She had not meant to snarl.

Patrick flushed. “What the devil are you doing? Are you having phone sex with…with him? I thought you hated him!”

Rachel’s mind was spinning. “I thought you were in bed. No. I mean, no, I’m not…why are you here? I thought you were in bed.” 

“What does that have to do with anything? Why are you doing that? And why with him? What the hell? And, why have you been lying to me? This is your vital work you’ve been doing all this time? Cybersex?” 

“It’s not what it seems! I just…”

Meanwhile, the very realistic Tom avatar continued to lick his lips suggestively whispering all the while, begging Rachel to…

Rachel suddenly realized this whole conversation might go better if she shut off the projector.

Patrick’s lip quivered. “Do you? Do you love him? It? That nothing? What is wrong with you?! Are you…?”

“No! No! Of course, I don’t love him! This isn’t about love. You know I can’t stand him. That’s the whole point! This … this avatar…does whatever I tell him to. I just get a kick out of making him beg for it and being my complete slave.”

Now, Patrick’s lawyer mind took over and he felt calm and sounded rational despite his racing heart. “Do you know how sick that sounds, Rachel? Well, in case you don’t, let me tell you. It sounds very sick. And possibly illegal. Do you have permission to use his image…his voice…his gestures…in this way?”

“No, of course not. He doesn’t…I assume he doesn’t…I downloaded this from a site where nobody likes him. You think it may be illegal? Why? I could print out a picture of him from the news media. I can play clips of his broadcasts. Why not this? Isn’t he what you guys call a ‘public figure’? I could even make a parody of him, right?”

“Yeah. He is. You can. But that doesn’t mean you can use his images and sounds to build a model of him to have sex with! Anyway, it’s sick! You have a real live husband, for God’s sake! This is just … disgusting! Why would you want to have cybersex with someone you hate?”

“It isn’t always me. Sometimes, I make two of him and make them do each other.”

“Oh, cool. Now, I feel better. You are just sick. You know? You need help. Psychiatric help. And possibly legal help as well. This can’t be legal. It’s only a matter of time till he finds out and sues you and all the other sickos.”

“For what, exactly?”

Patrick’s lawyer mind began to churn again. “That’s a good question. I suppose the station could sue you for copyright infringement or trademark violation. I suppose he could sue you for…defamation of character? I don’t know exactly. This is so sick it has never been before the bench. But if Disney can successfully sue fans for making up stories based on characters that they stole from the public domain like Pecos Bill and Paul Bunyan, you can bet that these people can sue your butt. And, even if they are ultimately unsuccessful in the courts, you know your company will not like the publicity. This is not the kind of image they want to project. You are going up against a frigging media company Rachel! You didn’t think this through! They could win. They could take everything we own. What a complete…you are just…How many people can you do this with? Is it just him?”

“Oh, no. I don’t know, but I think you can get pretty much anyone famous on-line. I mean, you can find a website with the models to download. Then it takes a long time to compile, but once you have the model, you can get them to do anything. Anything. Think about it. Any. Thing. It doesn’t have to be sex.” Rachel paused, then added softly, “Tempting, isn’t it? Shall we see whether we can find on-line models of your ex?”

“No! This is just … disgusting. And, worst of all, this is exactly the kind of behavior that bio-based human beings would have engaged in if left to their own devices.”

Turing’s Nightmares: Axes to Grind

Tuesday, 8 September 2015

Posted by petersironwood in Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, emotional intelligence, empathy, ethics, M-trans, Samuel's Checker Player, the singularity


Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, Donald. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” Don’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, Don! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

Don ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence: Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence; I’m just saying it should not be the only, or even the main, axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered Don, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need… we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So, are you saying that they weren’t nice just because they weren’t smart enough? Because there are some people who are much nicer and probably not as intelligent.”

“OK, Don. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions along which we would like to make progress. Then we can define, either intensionally or, more likely, extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but they are somewhat different conceptually. Let’s say part of what we want is for the machine to have empathy. In one case, it has to be good at guessing what people are feeling based on context alone. Another skill is reading a person’s body language and facial expressions.”

“OK, Don, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and moved his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be ‘ethical’; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger moved his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it. He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button. Nothing happened.

And then, everything seemed to happen at once. A high-pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible, but Roger noticed just before losing consciousness that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

Friday, 21 August 2015

Posted by petersironwood in Uncategorized


Tags: AI, cognitive computing, ethics, Robotics, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input data directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand —- the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were… new… well, better, really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next, improved generation of AI systems. We are still trying to understand the nature of the debate, since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in the proposed approaches.”

“Alpha, Bravo and Charlie, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary, though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility, but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions, as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “It just sounds like complex but noisy music really. It’s not very interpretable without a lot of decoding work. Even then, we only understand a fraction of their debate. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”

Starting your Customer Experience with a Lie

Monday, 13 January 2014

Posted by petersironwood in Uncategorized


Tags: Customer experience, ethics, honesty, marketing, scam, spam, UX

I really need someone to explain the strategy behind the following kinds of communications to me. I get things in email and in snail mail that start out with something like, “In response to your recent enquiry…”, or “Here is the information you requested,” or “Congratulations! Your application was approved!” And they are all LIES! I understand that sometimes people lie. And I understand that companies are sometimes greedy. But I do not understand how it can possibly be in their interest to start their communications with a potential customer with a complete and easily discovered lie. What is up with that? So far, the only explanation I can come up with is that they only want a very small number of very, very gullible (perhaps even impaired) customers that they can soak every penny out of, so the initial contact is a kind of screening device. Any other suggestions?
