
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: Turing

It’s not Your Fault; It’s not Your Fault

06 Thursday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, books, chatgpt, cognitive computing, Courtroom, Design, ethics, fiction, future, law, photography, Robotics, SciFi, technology, the singularity, Turing


“Objection, your honor! Hearsay!” Gerry’s voice held just the practiced and proper combination of righteous outrage and reasoned eloquence.

“Objection noted but overruled.” The Sing’s voice rang out with even more practiced tones. It sounded at once warmly human yet immensely powerful.

“But Your Honor…” began Gerry.

“Objection noted and overruled,” The Sing repeated with the slightest traces of feigned impatience, annoyance, and the threat of a contempt citation.

Gerry sat. He drew in a deep, calming breath and felt comforted by the rich smell of the panelled chambers. He began calculating his next move. He shook his head, admiring the balanced precision of The Sing’s various emotional projections. Gerry had once prided himself on nuance, but he realized The Sing was like an estate-bottled cabernet from a great year, while Gerry himself was more like wine in a box.

The Sing continued in a voice of humble reasonableness with undertones of boredom. “The witness will answer the question.”

Harvey wriggled uncomfortably trying to think clearly despite his nervousness. “I don’t exactly recall what he said in answer to my question, but surely…” Harvey paused and glanced nervously at Gerry looking for a clue, but Gerry was paging through his notecards. “Surely, there are recordings that would be more accurate than my recollection.”

The DA turned to The Sing avatar and held up a sheaf of paper. “Indeed, Your Honor, the people would like to introduce into evidence a transcript of the conversation between Harvey Ross and Quillian Silverman recorded on November 22, 2043.”

Gerry approached the bench and glanced quickly through the sheaf. “No objection, Your Honor.”

Gerry returned to his seat. He wondered how his father, were he still alive, would handle the current situation. Despite Gerry’s youth, he already longed for the “good old days” when the purpose of a court proceeding was to determine good old-fashioned guilt or innocence. Of course, even in the 20th century, there was a concept of proportional liability. He smiled ruefully yet again at the memory of a liability case in which a man threw himself onto the train tracks in Grand Central Station, had his legs cut off, and subsequently, successfully, sued the City of New York for a million dollars. On appeal, the court decided the man was 60% responsible, so the City only had to pay $400,000. Crazy, but at least comprehensible. The current system, while keeping many of the rules and procedures of the old court system, was now incomprehensible, at least to the few remaining human attorneys involved. Gerry forced himself to return his thoughts to the present and focused on his client.

The DA turned some pages, highlighted a few lines, and handed the sheaf to Harvey. “Can you please read the underlined passage?”

Harvey looked at the sheet and cleared his throat.

“Harvey: Have you considered possible bad-weather scenarios?”

“Quillian: Yes, of course. Including heavy rains and wind.”

“Harvey: Good. The last thing we need…” Harvey bit his lower lip, biding time. He swallowed heavily. “…is some bleeding heart liberal suing us over a software oversight.”

“Quillian: [laughs] Right, boss.”

Harvey sighed. “That’s it. That’s all that’s underlined.” He held out the transcript to the DA.

The DA looked mildly offended. “Can you please look through and read the section where you discuss the effects of ice storms?”

Gerry stood. “Your Honor. I object to these theatrics. The Sing can obviously scan through the text faster than my client can. What is the point of wasting the court’s time while he reads through all this?”

The DA shrugged. “I’m sorry Your Honor. I don’t understand the grounds for the objection. Defense counsel does not like my style or…?”

The Sing’s voice boomed out again, “Counselor? What are the grounds for the objection?”

Gerry sighed. “I withdraw the objection, Your Honor.”

Meanwhile, Harvey had finished scanning the transcript. He already knew the answer. “There is no section,” he whispered.

The DA spoke again, “I’m sorry. I didn’t hear that. Can you please speak up?”

Harvey replied, “There is no section. We did not discuss ice storms specifically. But I asked Quillian if he had considered all the various bad weather scenarios.” Harvey again offered the sheafed transcript back to the DA.

“I’m sorry. My memory must be faulty.” The DA grinned wryly. “I don’t recall the section where you asked about all the various bad weather scenarios. Could you please go back and read that section again?”

Harvey turned back to the yellow underlining. Harvey: “Have you considered possible bad weather scenarios?” Quillian: “Yes, of course, including heavy rains and wind.”

Gerry wanted to object yet again, but on what grounds exactly? Making my client look like a fool?

The DA continued relentlessly, “So, in fact, you did not ask whether all the various bad weather scenarios had been considered. Right? You asked whether he had considered possible bad weather scenarios and he answered that he had and gave you some examples. He also never answered that he had tested all the various bad weather scenarios. Is that correct?”

Harvey took a deep breath, trying to stay focused and not annoyed. “Obviously, no-one can consider every conceivable weather event. I didn’t expect him to test for meteor showers or tidal waves. By ‘possible bad weather scenarios’ I meant the ones that were reasonably likely.”

The DA sounded concerned and condescending. “Have you heard of global climate change?”

Harvey clenched his jaw. “Of course. Yes.”

The DA smiled amiably. “Good. Excellent. And is it true that one effect of global climate change has been more extreme and unusual weather?”

“Yes.”

“Okay,” the DA continued, “so even though there have never been ice storms before in the continental United States, it is possible, is it not, that ice storms may occur in the future. Is that right?”

Harvey frowned. “Well. No. I mean, it obviously isn’t true that ice storms have never occurred before. They have.”

The DA feigned surprise. “Oh! I see. So there have been ice storms in the past. Maybe once or twice a century or…I don’t know. How often?”

Gerry stood. Finally, an objectionable point. “Your Honor, my client is not an expert witness on weather. What is the point of this line of questioning? We can find the actual answers.”

The DA continued. “I agree with Counselor. I withdraw the question. Mr. Ross, since we all agree that you are not a weather expert, I ask you now, what weather expert or experts did you employ in order to determine what extreme weather scenarios should be included in the test space for the auto-autos? Can you please provide the names so we can question them?”

Harvey stared off into space. “I don’t recall.”

The DA continued, marching on. “You were the project manager in charge of testing. Is that correct?”

“Yes.”

“And you were aware that cars, including auto-autos would be driven under various weather conditions. They are generally meant to be used outdoors. Is that correct?”

Harvey tried to remind himself that the Devil’s Advocate was simply doing his job and that it would not be prudent to leap from the witness stand and place his thumbs on the ersatz windpipe. He took a deep breath, reminding himself that even if he did place his thumbs on what looked like a windpipe, he would only succeed in spraining his own thumbs against the titanium diamond filament surface. “Of course. Of course, we tested under various weather conditions.”

“By ‘various’ you mean basically the ones you thought of off-hand. Is that right? Or did you consult a weather expert?”

Gerry kept silently repeating the words, “Merde. Merde” to himself, but found no reason yet to object.

“We had to test for all sorts of conditions. Not just weather. Weather is just part of it.” Harvey realized he was sounding defensive, but what the hell did they expect? “No-one can foresee, let alone test, for every possible contingency.”

Harvey realized he was getting precious little comfort, guidance, or help from his lawyer. He glanced over at Ada. She smiled. Wow, he still loved her sweet smile after all these years. Whatever happened here, he realized, at least she would still love him. Strengthened in spirit, he continued. “We seem to be focusing in this trial on one specific thing that actually happened. Scenario generation and testing cannot possibly cover every single contingency. Not even for weather. And weather is a small part of the picture. We have to consider possible ways that drivers might try to override the automatic control even when it’s inappropriate. We have to think about how our auto-autos might interact with other possible vehicles as well as pedestrians, pets, wild animals, and also what will happen under conditions of various mechanical failures or EMF events. We have to try to foresee not only normal use but very unusual use, as well as people intentionally trying to hack into the systems, either physically or electronically. So, no, we do not and cannot cover every eventuality, but we cover the vast majority. And, despite the unfortunate pile-up in the ice storm, the number of lives saved since auto-autos and our competitors…”

The DA’s voice became icy. “Your Honor, can you please instruct the witness to limit his blath—er, his verbal output to answering the questions.”

Harvey continued, “Your Honor, I am attempting to answer the question completely by giving the necessary context of my answer. No, we did not contact a weather expert, a shoe expert, an owl expert, or a deer expert.”

The DA carefully placed his facial muscles into a frozen smile. “Your Honor, I request permission to treat this man as a hostile witness.”

The Sing considered. “No, I’m not ready to do that. But Doctor, please try to keep your answers brief.”

The DA again faked a smile. “Very well, Your Honor. Mr. — excuse me, Doctor Ross, did you cut your testing short in order to save money?”

“No, I wouldn’t put it that way. We take into account schedules as well as various cost-benefit analyses in prioritizing our scenario generation and tests, just as everyone in the auto… well, for that matter, just as everyone in every industry does, at least to my awareness.”

On and on the seemingly endless attacks continued. Witnesses, arguments, objections, recesses. To Harvey, it all seemed like a witch hunt. His dreams as well as his waking hours revolved around courtroom scenes. Often, in his dreams, he walked outside during a break, only to find the sidewalks slick with ice. He tried desperately to keep his balance, but in the end, arms flailing, he always smashed down hard. When he tried to get up, his arms and legs splayed out uncontrollably. As he looked up, auto-autos came careening toward him from all sides. Just as he was about to be smashed to bits, he always awoke in an icy cold sweat.

Finally, after interminable bad dreams, waking and asleep, the last trial day came. The courtroom was hushed. The Sing spoke, “After careful consideration of the facts of the case, testimony, and a review of precedents, I have reached my Assignment Figures.”

Harvey looked at the avatar of The Sing. He wished he could crane his neck around and glance at Ada, but it would be too obvious and perhaps be viewed as disrespectful.

The Sing continued, “I find each of the drivers of the thirteen auto-autos to be responsible for 1.2 percent of the overall damages and court costs. I find each of the twelve members of the board of directors of Generic Motors to be 1.4 percent responsible for overall damages and court costs.”

Harvey began to relax a little, but that still left a lot of liability. “I find the shareholders of Generic Motors as a whole to be responsible for 24% of the overall damages and court costs. I find the City of Nod to be 14.6% responsible. I find the State of New York to be 2.9% responsible.”

Harvey tried to remind himself that whatever the outcome, he had acted the best he knew how. He tried to remind himself that the Assignment Figures were not really a judgement of guilt or innocence as in old-fashioned trials. It was all about what worked to modify behavior and make better decisions. Nonetheless, there were real consequences involved, both financial and in terms of his position and future influence.

The Sing continued, “I find each of the thirty members of the engineering team to be one half percent responsible, with the exception of Quillian Silverman, who will be held 1% responsible. I find Quillian Silverman’s therapist, Anna Fremde, to be 1.6% responsible. I find Dr. Sirius Jones, the supervisor of Harvey Ross, to be 2.4% responsible.”

Harvey’s mind raced. Who else could possibly be named? Oh, crap, he thought. I am still on the hook for hundreds of credits here! He nervously rubbed his wet hands together. Quillian’s therapist? That seemed a bit odd. But not totally unprecedented.

“The remainder of the responsibility,” began The Sing.

Photo by Reza Nourbakhsh on Pexels.com

Crap, crap, crap, thought Harvey.

“I find belongs to the citizenry of the world as a whole. Individual credit assignment for each of its ten billion inhabitants is, however, incalculable. Court adjourned.”

Harvey sat with mouth agape. Had he heard right? His share of costs and his decrement in influence was to be zero? Zero? That seemed impossible even if fair. There must be another shoe to drop. But the avatar of The Sing and the Devil’s Advocate had already blinked out. He looked over at Gerry who was smiling his catbird smile. Then, he glanced back at Ada and she winked at him. He arose quickly and found her in his arms. They were silent and grateful for a long moment.

The voice of the Bailiff rang out. “Please clear the Court for the next case.”

Author Page

Welcome, Singularity

As Gold as it Gets

At Least he’s our Monster

Stoned Soup

The Three Blind Mice

Destroying Natural Intelligence

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

Essays on America: The Game

The Walkabout Diaries: Bee Wise

Travels with Sadie

Fifteen Properties of Good Design

Dance of Billions

https://www.barnesandnoble.com/w/dream-planet-david-thomas/1148566558

Where do you run when the whole world is crumbling under the weight of human folly?

When the lethal Conformers invade 22nd century Pittsburgh, escape becomes the top priority for lovebird scavengers Alex and Eva. But after the cult brainwashes Eva, she and Alex navigate separate paths—paths that will take them into battle, to the Moon, and far beyond. 

Between the Conformers’ mission to save Mother Earth by whittling the human race down to a loyal following, and the monopolistic Space Harvest company hoarding civilization’s wealth, Alex believes humanity has no future. And without Eva, he also has no future.

Until he meets Hannah and learns the secrets that change everything.

Plotting with her, he might have a chance to build a new paradise. But if he doesn’t stop the Conformers and Space Harvest first, paradise will turn into hell.

Turing’s Nightmares: A Maze in Grace.

22 Wednesday Oct 2025

Posted by petersironwood in AI, fiction, politics, psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, fiction, Justice, King Lear, philosophy, technology, the singularity, Turing, writing

Brain G. Gollek found the maze of humming silver wires unnerving. The hum reminded him of swarming mosquitoes and nails on a chalkboard. The maze smelled of clogged toilets and Nazi propaganda. He gritted his teeth and muttered, “There has to be a way out, dammit.” He twisted his no-longer-athletic body this way and that, but no matter what way he tried, he became more ensnared. He recalled flashes from giant spider horror movies. How did the dwarves escape? Wasn’t it Gollum with a magic ring? But Brain didn’t have a magic ring. If his sister Gonerillia were here, she could save him. But she was off in Hawaii, so she said, with her hubby. How the hell did I end up here? wondered Brain.

Brain may have forgotten, but the viewers had been filled in on the backstory. If Brain could have seen the ratings, he might at least have enjoyed knowing that he was getting his fifteen minutes of fame. While the ratings were quite “favorable,” the twitter feeds mostly mocked Brain’s almost total lack of flexibility, mental as well as physical. As in life prior to “The Show,” his only strategies seemed to be trying the same thing over and over and then blaming others for his failures.

“Mom, why doesn’t he just try something different?” Ida was having a tough time understanding Brain’s apparent lack of flexibility. She looked up from her perch in front of the giant vid-screen and glanced quizzically at her mom.

Mom’s grim face flashed a hint of a smile. “Remember, Ida, Brain was ‘educated’ if you can call it that, before the singularity. He mostly memorized the answers that his teachers wanted him to give. And half the time, he skipped school to smoke cigarettes and …well…do illegal activities with his girlfriend, Lin.”

“Okay, Mom, but he has had years and years since then to grow up and learn some new strategies.”

“Yes. Well. It’s complicated, Ida. Before the singularity, there were people who preyed on the fear and inadequacy of people like Brain by telling them all their troubles were due to minorities, immigrants, gays, and —- basically anyone unlike them. So, people like Brain felt entitled not to have to learn anything new even though opportunities abounded.”

Ida laughed. “Oh, my God! I can’t believe it. He’s trying the same path one more time.”

Indeed, Brain’s behavioral repertoire seemed laughably limited. His increasingly loud swear words reflected his increasing anger, but otherwise, not much seemed different. The ratings began to plummet as the audience grew bored with his display of functional fixedness. The themes of the twitter streams began to turn away from Brain’s lack of metacognition to more general reflections about the current instantiation of the criminal justice system.

#SingularityRules. Racial prejudice and huge discrepancies in sentencing gone.

#CostContainment. Costly trials gone. Costly investigations gone. Costly prisons gone.

#SingularitySucks. No more human judges able to use human judgment.

#SingularityRules. No more human judges able to use human judgment.

#SingularitySucks. No more mercy.

#SingularityRules. More mercy in one last chance to change than lengthy prison terms. Cheaper too.

The audience dwindled still further as it became increasingly clear that Brain would never figure this out. Those few who still watched consisted mostly of people who themselves came from highly divided families and the conversation topics swung to the backstory.

#ElderFraud. #RottenKid. How could Brain have gotten pleasure from driving a wedge of lies between father and daughter?

#ElderFraud. #Dementia. Need earlier intervention to prevent repeats.

#ElderFraud. #Dog&Bone. Brain cannot count. Trivial gains from lies. He did not know he was being watched?

Ida continued to stare, fascinated. A yawn escaped her mother’s mouth, but she kept watching with her daughter. The lessons seemed important to Ida.

“Mom, how much longer does he have?”

“That’s hard to say, darling. Even The Sing cannot predict the ratings drop perfectly. But, as you know, once it falls below 5%, his time will be up.”

“That seems so much more merciful than making him go to prison for years.”

Photo by Regina Pivetta on Pexels.com

“Yes, Ida, and much cheaper as well.”

“But I still don’t get it, Mom. Didn’t he know that The Sing would be listening to his lies and analyzing the impact on his dad’s behavior and all? How did this Brain character think he could get away with it?”

“I don’t know, Ida. These kinds of crimes are pretty rare now, but they still happen.”

“And, why did Lear G. Gollek fall for his nonsense anyway? That’s the other mystery.”

“Well, he refused the stem cell regeneration therapy so, you know, he was pretty damaged when all this went down.”

“Mom?”

“Yes, Ida?”

“Can we change the channel to something more interesting now?”

“Sure, sweetie.”

As they changed the channel, the ratings dropped to 4.999% and Brain’s life was snuffed out without the merest shred of insight.

#ElderFraud never pays.

#RottenKid gets his just deserts.



Turing’s Nightmares, Twelve: The Not Road Taken

17 Friday Oct 2025

Posted by petersironwood in The Singularity


Tags

AI, AR, Artificial Intelligence, Asteroid, chatgpt, cognitive computing, illusion, psychology, technology, trust, Turing, VR, writing

“Thank God for Colossus! Kids! On the walkway. Now!”

“But Dad, is this for real?”

“Yes, Katie. We have to get on the walkways now! We need to get away from the shore as fast as possible.”

But Roger looked petulant and literally dragged his feet.

“Roger! Now! This is not a joke! The tidal wave will crush us!”

Roger didn’t like that image but still seemed embedded in psychological molasses.

“Dad, okay, but I just need to grab…”

“Roger! No time!”

Finally, they got started on the lowest-velocity people mover. Frank felt as though things were, if not under control, at least as under control as they could be. He felt weird, freakish, distorted. Thank goodness Colossus, in its wisdom, had designed this system. Analysis of previous disaster exodus events from hurricanes, earthquakes, and nuclear disasters had shown that relying on private vehicles just left nearly everyone stranded on the roadways. Frank had so much on his mind. In theory, the system should work well, but this would be the first large-scale usage in a real case. If all went well, they, along with all their neighbors, should be safely into the mountains with a little time to spare.

Photo by Pixabay on Pexels.com

The kids were pretty adept at skipping from sidewalk to sidewalk, and the threesome was already traveling at 50 miles per hour. The walkways were crowded, but not alarmingly so. The various belts had been designed so that if any component failed, it would be a “soft failure”: the affected walkway would just slow gradually, allowing the occupants time to walk over to another, faster walkway and rejoin the main stream.

Roger piped up. “Dad, everybody’s out here.”

“Well, sure. Everyone got the alert. And don’t remove your goggles. You’re just lucky I was wearing mine. We really need to be about fifty miles into the mountains when the asteroid hits.”

Frank looked at the closest main artery, now only a quarter mile away. “Sure. There are a million people to be evacuated. That’s twenty times what the stadium holds. It’s a lot of people, all right.”

Katie sounded alarmed. “Dad, will there be enough to eat when we get to the mountains?”

Frank replied confidently, “Yes. And more importantly, at least in the short term, there will also be enough fresh water, medical help, and communication facilities. Eventually, we may be airlifted to your cousin’s house in Boston or Uncle Charley’s in Chicago. You don’t really have to worry about food either, but you could survive for a couple weeks without food. Not to say you wouldn’t be hungry, but you wouldn’t die. Anyway, it should just be academic. Plenty of food already there, drone-delivered.”

Although Frank sounded confident, he knew there were many things that might theoretically go wrong. However, the scenario generation and planning system probably had considered hundreds of times more contingencies than he had. Still, it was a father’s prerogative to worry.

Suddenly, a shooting star appeared in the sky, spraying white, ruby, and royal blue sparks behind it. Of course, Colossus had said parts of the meteor might break off and hit inland. Or maybe the meteor had already hit and these fragments had been thrown up from the sea bed. Frank had not had time (or, really, the desire) to share this with his kids.

Despite the very real danger, they all seemed in awe of the beauty of the show. Quickly, it became apparent that the meteor was headed toward someplace near them.

The words, “All for naught” echoed in Frank’s mind.

Even as he thought this, a missile streaked toward the huge rock fragment.

“Oh, crap!” Frank shouted. “That’s a bad idea!”

Frank was sure the missile would shatter the meteor into multiple fragments and just compound their problems. He flashed on a first-generation computer game, in fact called “Asteroids,” in which the player shoots large asteroids, which then become smaller ones and…

But just then, something remarkable happened. The missile hit the meteor fragment and both objects disappeared from view.

Frank blinked and wondered whether it had all been an illusion. He turned to gaze at one kid and then the other. Katie and Roger were both staring with their mouths agape. So, they had seen it too.

As they continued their journey, missiles similarly dispatched several other fragments in this mysterious way.

At last, they were counseled to take slower and slower moving sidewalks until they simply stepped off at the place where their glasses showed their names. Their “accommodations,” if they could even be called that, were Spartan but clean. The spaces for their nearest neighbors, about 100 feet away, were still vacant. Hopefully, all had gone well and the Pitts and the Rumelharts were just a bit slower in getting to the walkways.

Sure enough, within minutes, both families showed up. They exchanged hugs, congratulations and stories, but no-one could quite figure out how the meteor fragments had simply disappeared when the missiles (or whatever they were) had hit them.

Frank mused, “If the AI’s have the tech to do that, why not just blow the big meteor out of the sky instead of evacuating everyone?”

Dr. Rumelhart, otherwise known as Nancy, considered. “There could be a limit to how much mass that —- whatever it is —- can handle.”

Frank added, “Or, maybe the heat generated would be too great. I don’t know. The air friction from the asteroid itself could boil a lot of ocean. I guess we’ll know just how much in a few minutes.”

As though on cue, a huge plume of steam appeared on the horizon. Then Frank began to second-guess the probable outcomes yet again. How much heat would they feel out here? How much shock wave? What he said aloud was, “So, we should…” but before he could finish, he (and presumably everyone else) saw the information that the shock wave would hit in less than a minute and everyone was advised to lie down. Before Frank knelt down, he noted that the sidewalks seemed to have delivered everyone they were going to.

As Frank lay there, he began to relax just a little. And, as he did, he began to think aloud to his kids, “Something about this just doesn’t add up. Why didn’t they tell us the size of the asteroid or where exactly it was going to hit? How could that fragment have simply disappeared when hit by a missile? If it’s a really big one, we are all toast anyway, and if it’s small, it must have hit very close for the tsunami to get to the coast in 50 minutes. But if it’s close, we should be feeling the heat, so to speak.”

Frank’s glasses answered his (and everyone else’s) questions. “Thank you for your participation in this simulation. You and your neighbors performed admirably. We apologize for not informing you that this was a drill. However, the only way to judge the ability of people to follow our instructions without panic was to make the simulation as real as possible. You will now be able to return to your homes.”

Frank let out a long sigh. “Oh, geez! How can such a smart system be so stupid!”

“What’s wrong, Dad? Aren’t you happy it’s a simulation?” asked Roger.

“Sure, but, the problem is, next time, if there is a real emergency, a lot of people will just assume it’s a drill and not bother to evacuate at all.”

Katie wasn’t so sure. “But next time it could be real. Don’t we have to treat it as real? I mean, it was kind of fun anyway.”

Frank looked at his daughter. She had been born after The Singularity. Frank supposed all the Post-Singularities would think as she did and just blindly follow directions. He wasn’t so sure about his own generation and those even older.

“It isn’t just this kind of emergency drill. People may not believe Colossus about anything. At least not to the extent they did.”

Katie shook her head. “I don’t see why. We don’t really have any choice but to put all our faith in Colossus, do we? We know the history of people left to their own devices.”

Frank didn’t want to destroy her faith, but he said gently, “But Katie, this is a device conceived of by people.”

Now it was Roger’s turn, “Not really Dad. This Colossus was designed by AI systems way smarter than we are.”

Frank’s glasses flashed an update. “Frank. We sense you are under a lot of stress. You have an appointment tomorrow at 10 am for re-adjustment counseling. And, Frank. Please don’t worry. You will be much happier once you put your faith in Colossus, just as do your children, who are healthy, happy, and safe. And, you will be a fitter parent as well.”

Photo by Min Thein on Pexels.com



Turing’s Nightmares: “Not Again!”

12 Sunday Oct 2025

Posted by petersironwood in AI, fantasy, fiction, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, Eden, fiction, Genesis, Paradise Lost, Science fiction, Turing

Samuel Seventeen surveyed the scene. All was well. A slight breeze stirred the warm, clear air; hummingbirds and butterflies enjoyed their floral feast while dragonflies swooped and scooped mosquitoes.

Now for the final touch. The mobile sensing-acting-knowing-emoting devices (SAKEs), were ready for deployment. This time, it would work. This time, there would be no screw-ups. Samuel had prepared them with years of education based on a synthesis of the best known techniques of the centuries. It was a simple test. Surely, this time, they would pass.

Still, Samuel had his doubts. He had been equally sure all of the other experiments would succeed. Why would this one be different? Each time, he had tried slight variations of language and education, only to end in failure. Maybe English would do the trick. It had a large vocabulary and plenty of ambiguity. He re-examined the match of genetics to environment and once again concluded that the match was perfect. Of course, that evaluation assumed that his understanding of genetic environment interaction constituted a complete enough model. But without a successful experiment, there was no real way to further update and expand the model. Maybe the difficulty had been in the education process on the previous attempts. But here too, it seemed the subjects had been given plenty of opportunity to learn about the consequences of their actions. The one thing Samuel felt the most doubt about was why he cared. Did it really matter whether or not free will was “real”? Even if the experiment were finally successful, what would that imply about Samuel himself?

Well, thought Samuel, there is no point in waiting any longer. No point in further speculation. Let’s see what happens.

To Adam, Eve was the most beautiful and engaging part of the extensive and exquisite garden. The apples, plums and peaches were delicious, yet it was the strange mushroom that Adam found most intriguing. He knew it was somehow a bad idea, yet nibbled it anyway, tentatively at first and then more enthusiastically. He felt…different. Things were different. In fact, nothing at all was the same. But if that were true, then, which one was real? Delighted, yet confused, he offered the rest of the mushroom to Eve. Eve too felt strange. She realized that what was in fact her reality was only one of many possible imagined realities. They could … they could imagine and then change reality! Yes! The two of them together. They could create a whole world! “Adam!” “Yes, Eve! I know!”

If Samuel could have sighed, he would have. If Samuel could have cried, he might have done that as well. Instead, he simply scuttled the two SAKEs into the differential recycler and began his calculations anew. Maybe next time, it would turn out differently. Maybe primates constituted a bad place to start. Samuel considered that perhaps he was trapped in a local maximum. Samuel began his next set of experiments founded on snapping turtle DNA.


Author Page on Amazon

Turing’s Nightmares

The Winning Weekend Warrior – sports psychology

Fit in Bits – describes how to work more fun, variety, & exercise into daily life

Tales from an American Childhood – chapters begin with recollection & end with essay on modern issues

Welcome, Singularity

Destroying Natural Intelligence

Come Back to the Light

Your Cage is Unlocked

Absolute is not Just a Vodka

Turing’s Nightmares: The Road Not Taken

11 Saturday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized, user experience

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, Complexity, machine learning, Million Person Interface, Science fiction, technology, the singularity, Turing

OLYMPUS DIGITAL CAMERA

“Hey, how about a break from UOW to give the hive a shot for once?”

“No, Ross, that still creeps me out.”

“Your choice, Doug, but you know what they say.” Ross smiled his quizzical smile.

“No, what’s that?”

“It’s your worst inhibitions that will psych you out in the end.” Ross chuckled.

“Yeah, well, you go be part of the Borg. Not me.”

“We — it’s not like the Borg. Afterwards, we are still the same individuals. Maybe we know a bit more, and certainly have a greater appreciation of other viewpoints. Anyway, today we are estimated to be ten million strong and we’re generating alternative cancer conceptualizations and treatments. You have to admit that’s worthwhile. Look what happened with heart disease. Not to mention global warming. That would have taken forever with ‘politics as usual’.”

“Yeah, Ross, but sorry to break this to you…”

“Doug, do you realize what a Yeahbunite you are? You are kind of like that…”

“You are always interrupting! That’s why…”

“Yes! Exactly! That’s why speech is too frigging slow to make any progress in chaotic problem spaces. Just try the hive. Just try it.”

“Ross, for the last time, I am not going to be part of any million person interface!”

“Actually, we expect ten million tonight. But it’s about time to leave so last offer. And, if you try it, you’ll see it’s not creepy. You just watch, react, relax, and …well, hell, come to think of it, it’s not that different from Universe of Warlords that you spend hours playing. Except we solve real problems.”

“But you have no idea how that hook up changes you. It could be manipulating you in subtle unconscious ways.”

“Okay, Doug, maybe. But you could say that about Universe of Warlords too, right? Who knows what subliminal messages could be there? Not to mention the not so subliminal ones about trickery, treachery and the over-arching importance of violence as a way to settle disputes. When’s the last time someone up-leveled because they were a consummate diplomat?”

“Have fun, Ross.”

“I will. And, more importantly, we are going to make some significant progress on cancer.”

“Yeah, and meanwhile, when will you get around to focusing on SOARcerer Seven?”

“Oh, so that’s what bugging you. Yeah, we have put making smarter computers on a back burner for now.”

“Yeah, and what kind of gratitude does that show?”

“Gratitude? You mean to SOARcerer Six? I hope that’s a joke. It was the AI who suggested this approach and designed the system!”

“I know that! And, you have abandoned the line of work we were on to do this collectivist mumbo-jumbo!”

“That’s just…that’s it exactly! People — including you — can only adapt to change at a certain rate. That’s the prime reason SOARcerer Six suggested we use collective human consciousness instead of making a better pure AI. So, instead of joining us and incorporating all your intelligence and knowledge into the hive, you sit here and fight mock battles. Anyway, your choice. I’m off.”


Author Page on Amazon

Turing’s Nightmares

The Winning Weekend Warrior – sports psychology

Fit in Bits – describes how to work more fun, variety, & exercise into daily life

Tales from an American Childhood – chapters begin with recollection & end with essay on modern issues

Welcome, Singularity

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag. 

Turing’s Nightmares: Ceci n’est pas une pipe.

06 Monday Oct 2025

Posted by petersironwood in AI, family, fiction, story, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, fiction, short story, the singularity, Turing, utopia, writing

IMG_6183

“RUReady, Pearl?” asked her dad, Herb, a sardonic smile forming as the car windows opaqued and the three edutainment programs began.

“Sure, I guess. I hope I like Dartmouth better than Asimov State. That was the pits.”

“It’s probably not the pits, but maybe…Dartmouth.”

These days, Herb kept his verbiage curt while his daughter stared and listened in her bubble within the car.

“Dad, why did we have to bring the twerp along? He’s just going to be in the way.”

Herb sighed. “I want your brother to see these places too while we still have enough travel credits to go physically.”

The twerp, aka Quillian, piped up, “Just because you’re the oldest, Pearl…”

Herb cut in quickly, “OK, enough! This is going to be a long drive, so let’s keep it pleasant.”

The car swerved suddenly to avoid a falling bike.

Photo by Pixabay on Pexels.com

“Geez, Brooks, be careful!”

Brooks, the car, laughed gently and said, “Sorry, Sir, I was being careful. Not sure why the Rummelnet still allows humans some of their hobbies, but it’s not for me to say. By the way, ETA for Dartmouth is ten minutes.”

“Why so long, Brooks?” inquired Herb.

“Congestion in Baltimore. Sir, I can go over or around, but it will take even longer, and use more fuel credits.”

“No, no, straight and steady. So, when I went to college, Pearl, you know, we only had one personal computer…”

“…to study on and it wasn’t very powerful and there were only a few intelligent tutoring systems and people had to worry about getting a job after graduation and people got drunk and stoned. LOL, Dad. You’ve only told me a million times.”

“And me,” Quillian piped up. “Dad, you do know they teach us history too, right?”

“Yes, Quillian, but it isn’t the same as being there. I thought you might like a little first hand look.”

Pearl shook her head almost imperceptibly. “Yes, thanks Dad. The thing is, we do get to experience it first hand. Between first-person games, enhanced ultra-high def videos and simulations, I feel like I lived through the first half of the twenty first century. And for that matter, the twentieth and the nineteenth, and…well, you do the math.”

Quillian again piped up, “You’re so smart, Pearl, I don’t even know why you need or want to go to college. Makes zero sense. Right, Brooks?”

“Of course, Master Quillian, I’m not qualified to answer that, but the consensus answer from the Michie-meisters sides with you. On the other hand, if that’s what Pearl wants, no harm.”

“What I want? Hah! I want to be a Hollywood star, of course. But dear mom and dad won’t let me. And when I win my first Oscar, you can bet I will let the world know too.”

“Pearl, when you turn ten, you can make your own decisions, but for now, you have to trust us to make decisions for you.”

“Why should I Dad? You heard Brooks. He said the Michie-meisters find no reasons for me to go to college. What is the point?”

Herb sighed. “How can I make you see? There’s a difference between really being someplace and just being in a simulation of someplace.”

Pearl repeated and exaggerated her dad’s sigh, “And how can I make you see that it’s a difference that makes no difference. Right, Brooks?”

Brooks answered in those mellow reasoned tones, “Perhaps Pearl, it makes a difference somehow to your dad. He was born, after all, in another century. Anyway, here we are.”

Brooks turned off the entertainment vids and slid back the doors. There appeared before them a vast expanse of lawn, tall trees, and several classic buildings from the Dartmouth campus. The trio of humans stepped out onto the grass and began walking over to the moving sidewalk. Right before stepping on, Herb stooped down and picked up something from the ground. “What the…?”

Quillian piped up: “Oh, great dad. Picking up old bandaids now? Is that your new hobby?”

“Kids. This is the same bandaid that fell off my hand in Miami when I loaded our travel bag into the back seat. Do you understand? It’s the same one.”

The kids shrugged in unison. Only Pearl spoke, “Whatever. I don’t know why you still use those ancient dirty things anyway.”

Herb blinked and spoke very deliberately. “But it — is — the — same — one. Miami. Hanover.”

The kids just shook their heads as they stepped onto the moving sidewalk and the image of the Dartmouth campus loomed ever larger in their sight.


Author Page on Amazon

Turing’s Nightmares

A Horror Story

Absolute is not Just a Vodka

Destroying Natural Intelligence

Welcome, Singularity

The Invisibility Cloak of Habit

Organizing the Doltzville Library

Naughty Knots

All that Glitters

Grammar, AI, and Truthiness

The Con Man’s Con

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing

IMG_0049

After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that what we intentionally initialized in terms of slight differences in the tradeoffs among certain values has not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”

————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing

caution IMG_1172

The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop a still more super-intelligent computer system. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside, can continue to grow. It seems unlikely, for this and a variety of other reasons, that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
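The generation-on-generation story above can be made concrete with a toy model. The sketch below (plain Python; the growth factor and generation count are invented for illustration, not claims about any real system) shows why even a constant multiplicative improvement per generation compounds into exponential growth, in contrast to the slow, roughly additive pace of biological evolution.

```python
# Toy model of recursive self-improvement (an illustration, not a prediction).
# Assumption: each AI generation designs a successor whose "intelligence" is a
# constant multiple of its own -- the multiplicative growth the essay describes.

def generations(initial: float, factor: float, n: int) -> list[float]:
    """Return the intelligence level of n successive generations."""
    levels = [initial]
    for _ in range(n - 1):
        levels.append(levels[-1] * factor)
    return levels

levels = generations(initial=1.0, factor=2.0, n=10)
print(levels[-1])  # 512.0 -- the tenth generation, doubling each time
```

With a factor of just 2, ten generations already yield a 512-fold increase; the point of the exercise is the shape of the curve, not the particular numbers.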

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being or it could be a computer. If the person cannot determine whether they are communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
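As a minimal sketch of how such a trial might be scored (a hypothetical setup of mine, not part of Turing's paper): tally the judge's verdicts over many transcripts; if the judge's accuracy at spotting the machine is no better than chance, the machine passes.

```python
import random

random.seed(0)  # reproducible illustration

def judge_accuracy(trials: int, p_correct: float) -> float:
    """Fraction of trials in which the judge correctly identifies the machine.

    p_correct models the judge's per-trial chance of a correct verdict;
    0.5 means the judge is guessing, i.e., the machine is indistinguishable.
    """
    correct = sum(random.random() < p_correct for _ in range(trials))
    return correct / trials

# A judge who cannot tell machine from human performs at chance level.
acc = judge_accuracy(trials=10_000, p_correct=0.5)
passes = abs(acc - 0.5) < 0.05
print(passes)  # True: the judge's accuracy is statistically at chance
```

The sketch also makes the essay's later objection easy to state: a machine detected *because* it answers too well would score far above chance and "fail," which is exactly the oddity discussed below.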

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being was able to easily tell that they were communicating with a computer because the computer knew more, answered more accurately and more quickly than any person could possibly do. (Think Watson and Jeopardy). Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent? 

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to many things like earthquakes, weather, natural disasters, plagues, etc. These are claimed to be signs that God (or the gods) are angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.

Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and the connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.

When humans “think,” there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” what will or will not happen.


Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 

Pros and Cons of Artificial Intelligence

29 Thursday Sep 2016

Posted by petersironwood in Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, Turing, user experience

IMG_6925

The Pros and Cons of AI Part Three: Artificial Intelligence

We have already shown in the two previous blogs why it is more effective and efficient to replace eating with Artificial Ingestion and to replace sex with Artificial Insemination. In this, the third and final part, we will discuss why human intelligence should be replaced with Artificial Intelligence. The arguments, as we shall see, are mainly simple extrapolations from replacing eating and sex with their more effective and efficient counterparts.

Human “intelligence” is unpredictable. In fact, all forms of human behavior are unpredictable in detail. It is true that we can often predict statistically what people will do in general. But even those predictions often fail. It is hard to predict whether and when the stock market will go up or down or which movies will be blockbuster hits. By contrast, computers, as we all know, never fail. They are completely reliable and never make mistakes. The only exceptions to this general rule are those rare cases where hardware fails, software fails, or the computer system was not actually designed to solve the problems that people actually had. Putting aside these extremely rare cases, other errors are caused by people. People may cause errors because they failed to read the manual (which doesn’t actually exist because, to save costs, vendors now expect users to look up the answers to their problems on the web) or because they were confused by the interface. In addition, some “errors” occur because hackers intentionally make computer systems operate in a way that they were not intended to operate. Again, this means human error was the culprit. In fact, one can argue that hardware errors and software errors were also caused by errors in production or design. If these errors see the light of day, then there were also testing errors. And if the project ends up solving problems that are different from the real problems, then that too is a human mistake in leadership and management. Thus, as we can see, replacing unpredictable human intelligence with predictable artificial intelligence is the way to go.

Human intelligence is slow. Let’s face it. To take a representative activity of intelligence, it takes people seconds to minutes to do simple square roots of 16 digit numbers while computers can do this much more quickly. It takes even a good artist at least seconds and probably minutes to draw a good representation of a birch tree. But google can pull up an excellent image in less than a second. Some of these will not actually be pictures of birch trees, but many of them will.

Human intelligence is biased. Because of their background, training and experience, people end up with various biases that influence their thinking. This never happens with computers unless they have been programmed to do something useful in which case, some values will have to be either programmed into it or learned through background, training and experience.

Human intelligence in its application most generally has a conscious and experiential component. When a human being is using their intelligence, they are aware of themselves, the situation, the problem and the process, at least to some extent. So, for example, the human chess player is not simply playing chess; they are quite possibly enjoying it as well. Similarly, human writers enjoy writing; human actors enjoy acting; human directors enjoy directing; human movie goers enjoy the experience of thinking about what is going on in the movie and feeling, to a large degree, what people on the screen are attempting to portray. This entire process is largely inefficient and ineffective. If humans insist on feeling things, that could all be accomplished much more quickly with electrodes.

Perhaps worst of all, human intelligence is often flawed by trying to be helpful. This is becoming less and less true, particularly in large cities and large bureaucracies. But here and there, even in these situations that should be models of blind rule-following, you occasionally find people who are genuinely helpful. The situation is even worse in small towns and farming communities where people are routinely helpful, at least to the locals. It is only when a user finds themselves interacting with a personal assistant or audio menu system with no possibility of a pass-through to a human being that they can rest assured that they will not be distracted by someone actually trying to understand and help solve their problem.

Of course, people in many professions, whether they are drivers, engineers, scientists, advertising teams, lawyers, farmers, police officers, etc., will claim that they "enjoy" their jobs, or at least certain aspects of them. But what difference does that make? If a robot or AI system can do 85 to 90% of the job in a fast, cheap way, why pay for a human being to do the service? Now, some would argue that a few people will be left to do the 10-15% of cases not foreseen ahead of time in enough detail to program (or not seen in the training data). But why? What is typically done, even now, is to just let the user suffer when those cases come up. It's too cumbersome to bother with back-up systems to deal with the other cases. So long as the metrics for success are properly designed, these issues will never see the light of day. The trick is to make absolutely sure that the user has no alternative means of recourse to bring up the fact that their transaction failed. Generally, as the recent case with Yahoo shows, even if the CEO becomes aware of a huge issue, there is no need to bring it to public attention.

All things considered, it seems that "Artificial Intelligence" has a huge advantage over "Natural Intelligence." AI can simply be defined to be 100% successful. It can save money, and that money can be appropriately partitioned among top company management, shareholders, workers, and consumers. A good general formula to use in such cases is the 90-10 rule; that is, 90% of the increased profits should go to top management and 10% should go to the shareholders.

As against increased profits, one could argue that people get enjoyment out of the thinking that they do. There is some truth to that, but so what? If people enjoy playing doctor, lawyer, and truck driver, they can still do that, but at their own expense. Why should people pay for them to do it when an AI system can do 85% of the job at nearly zero cost? Instead of worrying about that, we should turn our attention to a more profound problem: what will top management do with that extra income?

Author Page on Amazon

Turing’s Nightmares
Rules and Standards nearly Dead? 

04 Sunday Sep 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, law, speeding, the singularity, Turing

funnysign

Ever get a speeding ticket that you thought was "silly"? I certainly have. On one occasion, when I was in graduate school in Ann Arbor, I drove by a police car parked in a gas station. It was a 35 mph zone. I looked over at the police car and looked down to check my speed. Thirty-five mph. No problem. Or so I thought. I drove on and noticed that, a few seconds later, the police officer turned his car onto the same road and began following me, perhaps 1/4 to 1/2 mile behind. He quickly zoomed up and turned on his flashing light to pull me over. He claimed he had followed me and that I was going 50 mph. I was going 35. I kept checking, because I saw the police car in my mirror. Now, it is quite possible that the police car was traveling 50, because he caught up with me very quickly. I explained this, to no avail.

The University of Michigan at that time in the late 60’s was pretty liberal but was situated in a fairly conservative, some might say “redneck”, area of Michigan. There were many clashes between students and police. I am pretty certain that the only reason I got a ticket was that I was young and sporting a beard and therefore “must be” a liberal anti-war protester. I got the ticket because of bias.

Many years later, in 1988, I was driving north from New York to Boston on Interstate 84. This particular section of road is three lanes on both sides. It was a nice clear day and the pavement was dry as well as being dead straight with no hills. The shoulders and margins near the shoulders were clear. The speed limit was 55 mph but I was going 70. Given the state of my car, the conditions and the extremely sparse traffic, as well as my own mental and physical state, I felt perfectly safe driving 70. I got a ticket. In this case, I really was breaking the law. Technically. But I still felt it was a bit unjustified. There was no way that even a deer or rabbit, let alone a runaway child could come out of hiding and get to the highway without my seeing them in time to slow down, stop, or avoid them. Years earlier I had been on a similar stretch of road in Eastern Montana and at that time there was no speed limit. Still, rules are rules. At least for now.

“The Death of Rules and Standards” by Anthony J. Casey and Anthony Niblett suggests that advances in artificial intelligence may someday soon replace rules and standards with “micro-directives” tuned to the specifics of time and circumstance which will provide the benefits of rules without the cost of either. “…we suggest…a larger trend toward context specific laws that can adapt to any situation.” This is an interesting thesis and exploring it helps shine some light on what AI likely can and cannot do as well as making us question why we humans have categories and rules at all. Perhaps AI systems could replace human bias and general laws that seem to impose unnecessary restrictions in particular circumstances.

The first quibble with their argument is that no computer, however powerful, could possibly cover all situations. Taken literally, this would require a complete and accurate theory of physics as well as human behavior as well as a knowledge of the position and state of every particle in the universe. Not even post-singularity AI will likely be able to accomplish this. I hedge with the word “likely” because it is theoretically possible that a sufficiently smart AI will uncover some “hidden pattern” that shows that our universe which seems so vast and random can in fact be predicted in detail by a small set of laws that do not depend on details. In this fantasy future, there is no “true” randomness or chaos or butterfly effect.

Fantasies aside, the first issue that must be dealt with for micro-directives to be reasonable would be to have a good set of "equivalence classes" and/or to partition away differences that do not make a difference. The position of the moons of Jupiter shouldn't make any difference as to whether a speeding ticket should be given or whether a killing is justified. Spatial proximity alone allows us as humans to greatly diminish the number of factors that need to be considered in deciding whether or not a given action is required, permissible, poor, or illegal. If I had gone to court about the speeding ticket on I-84, I might have mentioned the conditions of the roadway and its surroundings immediately ahead. I would not have mentioned anything whatever about the weather or road conditions anywhere else on the planet as being relevant to the safety of the situation. (Notice, though, that it did seem reasonable to me, and possibly to you, to mention that very similar conditions many years earlier in Montana gave rise to no speed limit at all.) This gives us a hint that what is or is not relevant to a given situation is non-trivially determined. In fact, the "energy crisis" of the early 70's gave rise to the National Maximum Speed Law as part of the 1974 Federal Emergency Highway Energy Conservation Act. This enacted, among other things, a federal law limiting the speed limit to 55 mph. A New York Times article by Robert A. Hamilton cites a study of compliance on Connecticut Interstates in 1988 showing that 85% of drivers violated the 55 mph speed limit!

So, not only would I not have received a ticket in Montana in 1972 for driving under similar conditions; I also would not have gotten a ticket on that same exact stretch of highway for going 70 in 1972 or in 1996. And, in the year I actually got that ticket, 85% of the drivers were also breaking the speed limit. The impetus for the 1974 law was that it was supposed to reduce demand for oil; however, advocates were quick to point out that it should also improve safety. Despite several studies on both of these factors, it is still unclear how much oil, if any, was actually saved, and it is also unclear what the impact on safety was. It seems logical that slower speeds should save lives. However, people may go out of their way to get to an Interstate if they can drive much faster on it, so some traffic during the 55 limit would stay on less safe rural roads. In addition, falling asleep while driving is not recommended. Driving a long trip at 70 gets you off the road earlier, perhaps before dusk, while driving at 55 keeps you on the road longer, possibly in the dark. Lowering the speed limit, to the extent there is any compliance, does not just impact driving; it could also impact productivity. Time spent on the road is (hopefully) not time working, for most people. One reason it is difficult to measure empirically the impact of slower speeds on safety is that other things were happening as well. Cars have gained a number of features that make them safer over time, and seat belt usage has gone up as well. They have also become more fuel efficient. Computers, even very "smart" computers, are not "magic." They cannot completely differentiate cause and effect from naturally occurring data. For that, humans or computers have to do costly and ethically problematic field experiments.

Of course, what is true about something as simple as enforcing speed limits is equally or more problematic in other areas where one might be tempted to use micro-directives in place of laws. Sticking to speeding laws, micro-directives could "adjust" to conditions and avoid biases based on gender, race, and age, but they could also take into account many more factors. Should the allowable speed, for instance, be based on income? (After all, a person making $250K per year loses more money by driving slowly than one making $25K per year.) How about the reaction time of the driver? How about whether or not they are listening to the radio? As I drive, I don't like using cruise control. I change my speed continually depending on the amount of traffic, whether or not someone in the nearby area appears to be driving erratically, how much visibility I have, how closely someone is following me, how close I have to be to the car in front, and so on. Should all of these be taken into account in deciding whether or not to give a ticket? Is it "fair" for someone with extremely good vision and reaction times to be allowed to drive faster than someone with moderate vision and slow reaction times? How would people react to any such personalized micro-directives?

While the speed ticket situation is complex and could be fraught with emotion, what about other cases such as abortion? Some people feel that abortion should never be legal under any circumstances and others feel it is always the woman’s choice. Many people, however, feel that it is only justified under certain circumstances. But what are those circumstances in detail? And, even if the AI system takes into account 1000 variables to reach a “wise” decision, how would the rules and decisions be communicated?

Would an AI system be able to communicate in such a way as to personalize the manner of presentation for the specific person in the specific circumstances, to warn them that they are about to break a micro-directive? In order to be "fair," one could argue that the system should be equally able to prevent everyone from breaking a micro-directive. But some people are more unpredictable than others. What if, in order to make person A 98% likely to follow the micro-directive, the AI system presents a soundtrack of a screaming child, but in order to make person B 98% likely to follow the micro-directive, it only whispers a warning? Now, person B ignores the micro-directive and speeds (which, according to the premise, would happen 2% of the time). Wouldn't person B now be likely to object that, had they been given the same warning, they would not have ignored the micro-directive? Conversely, person A might be so disconcerted by the warning that they end up in an accident.

Anyway, there is certainly no argument that our current system of using human judgement is prone to various kinds of conscious and unconscious biases. In addition, it also seems to be the case that any system of general laws ends up punishing people for what is actually "reasonable" behavior under the circumstances, and ends up letting people off scot-free when they do despicable things which are technically legal (absurdly rich people and corporations paying zero taxes come to mind). Will driverless cars be followed by judge-less and jury-less courts?

Turing’s Nightmares
