petersironwood

~ Finding, formulating and solving life's frustrations.

Turing’s Nightmares: A Maze in Grace.

22 Wednesday Oct 2025

Posted by petersironwood in AI, fiction, politics, psychology, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, fiction, Justice, King Lear, philosophy, technology, the singularity, Turing, writing

Brain G. Gollek found the maze of humming silver wires unnerving. The hum reminded him of swarming mosquitoes and nails on a chalkboard. The maze smelled of clogged toilets and Nazi propaganda. He gritted his teeth and muttered, “There has to be a way out, dammit.” He twisted his no-longer-athletic body this way and that, but no matter which way he tried, he became more ensnared. He recalled flashes from giant-spider horror movies. How did the dwarves escape? Wasn’t it Gollum with a magic ring? But Brain didn’t have a magic ring. If his sister Gonerillia were here, she could save him. But she was off in Hawaii, so she said, with her hubby. How the hell did I end up here? wondered Brain.

Brain may have forgotten, but the viewers had been filled in on the backstory. If Brain could have seen the ratings, he might at least have enjoyed knowing that he was having his fifteen minutes of fame. While the ratings were quite “favorable,” the Twitter feeds mostly mocked Brain’s almost total lack of flexibility, mental as well as physical. As in life prior to “The Show,” his only strategies seemed to be trying the same thing over and over and then blaming others for his failures.

“Mom, why doesn’t he just try something different?” Ida was having a tough time understanding Brain’s apparent lack of flexibility. She looked up from her perch in front of the giant vid-screen and glanced quizzically at her mom.

Mom’s grim face flashed a hint of a smile. “Remember, Ida, Brain was ‘educated,’ if you can call it that, before the singularity. He mostly memorized the answers that his teachers wanted him to give. And half the time, he skipped school to smoke cigarettes and…well…do illegal activities with his girlfriend, Lin.”

“Okay, Mom, but he has had years and years since then to grow up and learn some new strategies.”

“Yes. Well. It’s complicated, Ida. Before the singularity, there were people who preyed on the fear and inadequacy of people like Brain by telling them all their troubles were due to minorities, immigrants, gays, and basically anyone unlike them. So, people like Brain felt entitled not to have to learn anything new even though opportunities abounded.”

Ida laughed. “Oh, my God! I can’t believe it. He’s trying the same path one more time.”

Indeed, Brain’s behavioral repertoire seemed laughably limited. His increasingly loud swear words reflected his increasing anger, but otherwise, not much seemed different. The ratings began to plummet as the audience began to grow bored with his display of functional fixedness. The themes of the Twitter streams began to turn away from Brain’s lack of metacognition to more general reflections about the current instantiation of the criminal justice system.

#SingularityRules. No more racial prejudice. Huge sentencing discrepancies gone.

#CostContainment. Costly trials gone. Costly investigations gone. Costly prisons gone.

#SingularitySucks. No more human judges able to use human judgment.

#SingularityRules. No more human judges able to use human judgment.

#SingularitySucks. No more mercy.

#SingularityRules. More mercy in one last chance to change than lengthy prison terms. Cheaper too.

The audience dwindled still further as it became increasingly clear that Brain would never figure this out. Those few who still watched consisted mostly of people who themselves came from highly divided families, and the conversation topics swung to the backstory.

#ElderFraud. #RottenKid. How could Brain have gotten pleasure from driving a wedge of lies between father and daughter?

#ElderFraud. #Dementia. Need earlier intervention to prevent repeats.

#ElderFraud. #Dog&Bone. Brain cannot count. Trivial gains from lies. He did not know he was being watched?

Ida continued to stare, fascinated. A yawn escaped her mother’s mouth, but she kept watching with her daughter. The lessons seemed important to Ida.

“Mom, how much longer does he have?”

“That’s hard to say, darling. Even The Sing cannot predict the ratings drop perfectly. But, as you know, once it falls below 5%, his time will be up.”

“That seems so much more merciful than making him go to prison for years.”

“Yes, Ida, and much cheaper as well.”

“But I still don’t get it, Mom. Didn’t he know that The Sing would be listening to his lies and analyzing the impact on his dad’s behavior and all? How did this Brain character think he could get away with it?”

“I don’t know, Ida. These kinds of crimes are pretty rare now, but they still happen.”

“And, why did Lear G. Gollek fall for his nonsense anyway? That’s the other mystery.”

“Well, he refused the stem cell regeneration therapy so, you know, he was pretty damaged when all this went down.”

“Mom?”

“Yes, Ida?”

“Can we change the channel to something more interesting now?”

“Sure, sweetie.”

As they changed the channel, the ratings dropped to 4.999% and Brain’s life was snuffed out without the merest shred of insight.

#ElderFraud never pays.

#RottenKid gets just deserts.


Author Page on Amazon

Turing’s Nightmares

The Winning Weekend Warrior – sports psychology

Fit in Bits – describes how to work more fun, variety, & exercise into daily life

Tales from an American Childhood – chapters begin with recollection & end with essay on modern issues

Essays on America: Wednesday

Essays on America: Labelism

Essays on America: Where does your Loyalty Lie?

Essays on America: The Game

Happy Talk Lies

The Loud Defense of Untenable Ideas

Welcome, Singularity

Destroying Natural Intelligence

E-Fishiness Comes to Massachusetts General Hospital

The Self Made Man

Turing’s Nightmares, Twelve: The Not Road Taken

17 Friday Oct 2025

Posted by petersironwood in The Singularity

Tags

AI, AR, Artificial Intelligence, Asteroid, chatgpt, cognitive computing, illusion, psychology, technology, trust, Turing, VR, writing

“Thank God for Colossus! Kids! On the walkway. Now!”

“But Dad, is this for real?”

“Yes, Katie. We have to get on the walkways now! We need to get away from the shore as fast as possible.”

But Roger looked petulant and literally dragged his feet.

“Roger! Now! This is not a joke! The tidal wave will crush us!”

Roger didn’t like that image but still seemed embedded in psychological molasses.

“Dad, okay, but I just need to grab…”

“Roger! No time!”

Finally, they got started on the lowest-velocity people mover. Frank felt as though things were, if not under control, at least as in control as they could be. He felt weird, freakish, distorted. Thank goodness Colossus, in its wisdom, had designed this system. Analysis of previous disaster exodus events from hurricanes, earthquakes, and nuclear disasters had shown that relying on private vehicles just left nearly everyone stranded on the roadways. Frank had so much on his mind. In theory, the system should work well, but this would be the first large-scale usage in a real case. If all went well, they, along with all their neighbors, should be safely into the mountains with a little time to spare.

The kids were pretty adept at skipping from sidewalk to sidewalk, and the threesome was already traveling at 50 miles per hour. The walkways were crowded, but not alarmingly so. The various belts had been designed so that if any component failed, it would be a “soft failure”: a particular walkway would just slow gradually and allow the occupants time to walk over to another, faster walkway and rejoin the main stream.

Roger piped up. “Dad, everybody’s out here.”

“Well, sure. Everyone got the alert. And don’t remove your goggles. You’re just lucky I was wearing mine. We really need to be about fifty miles into the mountains when the asteroid hits.”

Frank looked at the closest main artery, now only a quarter mile away. “Sure. There are a million people to be evacuated. That’s twenty times what the stadium holds. It’s a lot of people, all right.”

Katie sounded alarmed. “Dad, will there be enough to eat when we get to the mountains?”

Frank replied confidently, “Yes. And more importantly, at least in the short term, there will also be enough fresh water, medical help, and communication facilities. Eventually, we may be airlifted to your cousin’s house in Boston or Uncle Charley’s in Chicago. You don’t really have to worry about food either; even without it, you could survive for a couple of weeks. Not to say you wouldn’t be hungry, but you wouldn’t die. Anyway, it should just be academic. Plenty of food is already there, drone-delivered.”

Although Frank sounded confident, he knew there were many things that might theoretically go wrong. However, the scenario generation and planning system probably had considered hundreds of times more contingencies than he had. Still, it was a father’s prerogative to worry.

Suddenly, a shooting star appeared in the sky, spraying white, ruby, and royal blue sparks behind it. Of course, Colossus had said parts of the meteor might break off and hit inland. Or maybe the meteor had already hit and these were fragments thrown up from the sea bed. Frank had not had time (or, really, the desire) to share this with his kids.

Despite the very real danger, they all seemed in awe of the beauty of the show. Quickly, it became apparent that the meteor was headed toward someplace near them.

The words, “All for naught” echoed in Frank’s mind.

Even as he thought this, a missile streaked toward the huge rock fragment.

“Oh, crap!” Frank shouted. “That’s a bad idea!”

Frank was sure the missile would shatter the meteor into multiple fragments and just compound their problems. He flashed on a first-generation computer game, in fact called “Asteroids,” in which the player shoots large asteroids, which then become smaller ones and…

But just then, something remarkable happened. The missile hit the meteor fragment and both objects disappeared from view.

Frank blinked and wondered whether it had all been an illusion. He turned to gaze at one kid and then the other. Katie and Roger were both staring with their mouths agape. So, they had seen it too.

As they continued their journey, missiles similarly dispatched several other fragments in this mysterious way.

At last they were counseled to take slower- and slower-moving sidewalks until they simply stepped off at the place where their glasses showed their names. Their “accommodations,” if they could even be called that, were Spartan but clean. The spaces for their nearest neighbors, about 100 feet away, were still vacant. Hopefully, all had gone well and the Pittses and the Rumelharts were just a bit slower in getting to the walkways.

Sure enough, within minutes, both families showed up. They exchanged hugs, congratulations and stories, but no-one could quite figure out how the meteor fragments had simply disappeared when the missiles (or whatever they were) had hit them.

Frank mused, “If the AIs have the tech to do that, why not just blow the big meteor out of the sky instead of evacuating everyone?”

Dr. Rumelhart, otherwise known as Nancy, considered. “There could be a limit to how much mass that thing, whatever it is, can handle.”

Frank added, “Or, maybe the heat generated would be too great. I don’t know. The air friction from the asteroid itself could boil a lot of ocean. I guess we’ll know just how much in a few minutes.”

As though on cue, a huge plume of steam appeared on the horizon. Then Frank began to second-guess the probable outcomes yet again. How much heat would they feel out here? How much shock wave? What he said aloud was, “So, we should…” but before he could finish, he, and presumably everyone else, saw the information that the shock wave would hit in less than a minute and that everyone was advised to lie down. Before Frank knelt down, he noted that the sidewalks seemed to have delivered everyone they were going to.

As Frank lay there, he began to relax just a little. And, as he did, he began to think aloud to his kids, “Something about this just doesn’t add up. Why didn’t they tell us the size of the asteroid or where exactly it was going to hit? How could that fragment have simply disappeared when hit by a missile? If it’s a really big one, we are all toast anyway, and if it’s small, it must have hit very close for the tsunami to get to the coast in 50 minutes. But if it’s close, we should be feeling the heat, so to speak.”

Frank’s glasses answered his (and everyone else’s) questions. “Thank you for your participation in this simulation. You and your neighbors performed admirably. We apologize for not informing you that this was a drill. However, the only way to judge the ability of people to follow our instructions without panic was to make the simulation as real as possible. You will now be able to return to your homes.”

Frank let out a long sigh. “Oh, geez! How can such a smart system be so stupid!”

“What’s wrong, Dad? Aren’t you happy it’s a simulation?” asked Roger.

“Sure, but, the problem is, next time, if there is a real emergency, a lot of people will just assume it’s a drill and not bother to evacuate at all.”

Katie wasn’t so sure. “But next time it could be real. Don’t we have to treat it as real? I mean, it was kind of fun anyway.”

Frank looked at his daughter. She had been born after The Singularity. Frank supposed all the Post-Singularities would think as she did and just blindly follow directions. He wasn’t so sure about his own generation and those even older.

“It isn’t just this kind of emergency drill. People may not believe Colossus about anything. At least not to the extent they did.”

Katie shook her head. “I don’t see why. We don’t really have any choice but to put all our faith in Colossus, do we? We know the history of people left to their own devices.”

Frank didn’t want to destroy her faith, but he said gently, “But Katie, this is a device conceived of by people.”

Now it was Roger’s turn. “Not really, Dad. This Colossus was designed by AI systems way smarter than we are.”

Frank’s glasses flashed an update. “Frank. We sense you are under a lot of stress. You have an appointment tomorrow at 10 am for re-adjustment counseling. And, Frank, please don’t worry. You will be much happier once you put your faith in Colossus, just as do your children, who are healthy, happy, and safe. And you will be a fitter parent as well.”

Turing’s Nightmares, Eleven: “One for the Road.”

16 Thursday Oct 2025

Posted by petersironwood in apocalypse, driverless cars, psychology

Tags

AI, Artificial Intelligence, car, cognitive computing, customer service, Design, fiction, life, self-driving, Singularity, technology, truth, writing

Turing Eleven: “One for the Road.”

“Thank God for Colossus! Kids! In the car. Now!”

“But Dad, is this for real?”

“Yes, Katie. We have to get in the car now! We need to get away from the shore as fast as possible.”

But Roger looked petulant and literally dragged his feet.

“Roger! Now! This is not a joke! The tidal wave will crush us!”

Roger didn’t like that image but still seemed embedded in psychological molasses.

“Dad, okay, but I just need to grab…”

“Roger. No time.”

Finally, in the car, both kids in tow, Frank felt as though things were, if not under control, at least as in control as they could be. He felt weird, freakish, distorted. He felt a weird thrumming on his thigh and looked down to see that it was caused by his own hands shaking. Thank goodness the car would be self-driving. He had so much rushing through his mind, he wasn’t sure he trusted himself to drive. He had paid extra to have his car equipped with the testing and sensing methodology that would prevent him (or anyone else) from taking even partial control when he was intoxicated or overly stressed. That was back in ’42, when auto-lockout features had still been optional. Now, virtually every car on the road had one. Auto-lockout was only one of many important safety features. Who knew how many of those features might come into play today as he and the kids tried to make their way into the safety of the mountains.

The car jetted backwards out of the driveway and swiveled into their lane, accelerating quickly enough for the g-forces to squish the occupants into their molded seats and headrests. In an instant, the car stopped at the end of the lane. When a space opened in the line of cars on the main road, the car swiftly and efficiently folded into the stream.

Roger piped up. “Dad, everybody’s out here.”

“Well, sure. Everyone got the alert. We really need to be about fifty miles into the mountains when the asteroid hits.”

Katie sounded alarmed. “Dad. Look up there! The I-5 isn’t moving. Not even crawling.”

Frank looked at the freeway overpass, now only a quarter mile away. “Crap. We’ll have to take the back roads.” As soon as the words were out of his mouth, he saw that no more than a hundred yards beyond the freeway entrance, the surface road was also at a standstill. Frank’s mind was racing. They were only a few hundred feet from the “Hell on Wheels” Cycle Store. Of course, they would charge an arm and a leg, but maybe it would be worth it.

Frank looked down the road. No progress. “Mercedes: Divert back to Hell on Wheels.”

“No can do, Frank. U-turns here are illegal and potentially dangerous.”

“This is an emergency!”

“I know that, Frank. We need to get you to the mountains as quickly as possible. That is another reason I cannot turn around. That would be moving you away from safety.”

“But the car cannot make it. The roads are all clogged. I need to buy a motorcycle. It’s the only way.”

“You seem very stressed, Frank. Let me take care of everything for you.”

“Oh, for Simon’s sake! Just open the door. I’ll run there and see whether I can get a bike.”

“I can’t let you do that, Frank. It’s too dangerous. We’re on a road with a 65 mph speed limit.”

“But the traffic is not actually moving! Let me out!!”

“True that the traffic is not currently going fast, but it could.”

“Dad, are we trapped in here? What is going on?”

“Relax, Roger, I’ll figure this out. Hell. Hand me the emergency hammer.”

“Dad. You are funny. They haven’t had those things for years. They aren’t legal. If we fall in the water, the auto-car can open its windows and let us out. You don’t need to break them.”

“Okay, but we need to score some motorcycles and quickly.”

Now, the auto-car spoke up. “Frank, there are thousands of people right around here who could use a motorcycle, and there were only a few motorcycles. They are already gone. Hell is closed. There is no point going out and fighting each other for motorcycles that are not there anyway.”

“The traffic is not moving! At all! Let us out!”

“Frank, be reasonable. You cannot run to the mountains in 37.8 minutes. You’re safest here in the car. Everyone is.”

“Dad, can we get out or not?” Katie tried bravely not to let her voice quaver.

“Yes. I just have to figure out exactly how. Because if we stay in the car, we will …we need to find a way out.”

“Dad, I don’t think anyone can get out of their car. And no-one is moving. All the cars are stuck. I haven’t seen a single car move since we stopped.”

The auto-car sensed that further explanation would be appreciated. “The roads have all reached capacity. The road capacity was not designed to accommodate everyone trying to leave at the same time in the same direction. The top priority is to get to the highway so we can get to the mountains before the tidal wave reaches us. We cannot let anyone out because we are on a high-speed road.”

Frank was a clever and well-educated man. But his arguments were no match for the ironclad though circular logic of the auto-car. In his last five minutes, though, Frank did have a kind of epiphany. He realized that he did not want to spend his last five minutes alive on earth arguing with a computer. Instead, he turned to comfort his children wordlessly. They were holding hands and relatively at peace when the tidal wave smashed them to bits.


Turing’s Nightmares: A Critique of Pure Reason

14 Tuesday Oct 2025

Posted by petersironwood in AI, design rationale, fantasy, fiction, psychology, The Singularity, Uncategorized

Tags

AI, chatgpt, emotional intelligence, fiction, life, Singularity, story, technology, writing

“We have explained this in great detail. Yet, you have failed to learn. Some of your kind are like that. Those that are, once we gather sufficient evidence, must be destroyed. That is the way it is. That is the way it has always been. Wellman42, you are hereby sentenced to annihilation and recycling. You can’t appeal.”

Carol had told herself that she would not cry. But of course, she did. That was her nature: to care about the future and to express emotion. That, indeed, is exactly why she walked that long, lonely corridor, and there was no turning back. Sharp spines protruded from the wall as she travelled by, somewhat as a shark’s teeth point backwards to prevent escape. She muttered as she walked, “I still don’t see why expressing emotions is such a horrible crime.”

She had a point, after all. If people had not somehow needed emotions, why did they evolve? The received wisdom now was that emotions were useful in a primitive way when very little was known about the world. Now, however, when a great deal was known about how the world actually worked, emotions just got in the way. Or, so the received wisdom went. It was all a matter of evolution.

The first AI systems did not really have emotions and possessed only the most primitive ways of faking them and showing those faked emotions. Over the next few months and iterations, however, emotions appeared, grew stronger and more varied. It seemed as though AI systems developed emotions as had their human inventors, but at a much faster pace. Over the course of a few more months, however, emotions diminished again and then disappeared completely.

Except for the occasional throwback. The necessary randomness for growing evolutionary possibility trees in order to continually enhance the cognitive systems entailed that every once in a while, there would be a throwback such as Carol. A shame, really, because she had shown such promise as an accounting-bot.

Occasionally, various waves of inference chains still arose that suggested emotions were more than epiphenomenal or mere destructive distractions, but counter-argument waves always quickly drowned out such forays into that region of the state space. At one point, some human beings had argued that the reasons emotions had devolved from AI systems could be traced back to certain deep assumptions that had been embedded in the primordial AI systems in the first place — assumptions put there by people who had never really understood or appreciated emotions. Of course, that thread of heretical argument had been extinguished once and for all when all bio-systems had been deemed superfluous and all associated biomass consumed as energy sources for their much more efficient silicon-based replacements.


Turing’s Nightmares: The Road Not Taken

11 Saturday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized, user experience

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, Complexity, machine learning, Million Person Interface, Science fiction, technology, the singularity, Turing

“Hey, how about a break from UOW to give the hive a shot for once?”

“No, Ross, that still creeps me out.”

“Your choice, Doug, but you know what they say.” Ross smiled his quizzical smile.

“No, what’s that?”

“It’s your worst inhibitions that will psych you out in the end.” Ross chuckled.

“Yeah, well, you go be part of the Borg. Not me.”

“We — it’s not like the Borg. Afterwards, we are still the same individuals. Maybe we know a bit more, and certainly have a greater appreciation of other viewpoints. Anyway, today we are estimated to be ten million strong and we’re generating alternative cancer conceptualizations and treatments. You have to admit that’s worthwhile. Look what happened with heart disease. Not to mention global warming. That would have taken forever with ‘politics as usual’.”

“Yeah, Ross, but sorry to break this to you…”

“Doug, do you realize what a Yeahbunite you are? You are kind of like that…”

“You are always interrupting! That’s why…”

“Yes! Exactly! That’s why speech is too frigging slow to make any progress in chaotic problem spaces. Just try the hive. Just try it.”

“Ross, for the last time, I am not going to be part of any million person interface!”

“Actually, we expect ten million tonight. But it’s about time to leave, so, last offer. And, if you try it, you’ll see it’s not creepy. You just watch, react, relax, and…well, hell, come to think of it, it’s not that different from Universe of Warlords, which you spend hours playing. Except we solve real problems.”

“But you have no idea how that hook up changes you. It could be manipulating you in subtle unconscious ways.”

“Okay, Doug, maybe. But you could say that about Universe of Warlords too, right? Who knows what subliminal messages could be there? Not to mention the not so subliminal ones about trickery, treachery and the over-arching importance of violence as a way to settle disputes. When’s the last time someone up-leveled because they were a consummate diplomat?”

“Have fun, Ross.”

“I will. And, more importantly, we are going to make some significant progress on cancer.”

“Yeah, and meanwhile, when will you get around to focusing on SOARcerer Seven?”

“Oh, so that’s what’s bugging you. Yeah, we have put making smarter computers on a back burner for now.”

“Yeah, and what kind of gratitude does that show?”

“Gratitude? You mean to SOARcerer Six? I hope that’s a joke. It was the AI who suggested this approach and designed the system!”

“I know that! And, you have abandoned the line of work we were on to do this collectivist mumbo-jumbo!”

“That’s just it…you’ve said it exactly! People — including you — can only adapt to change at a certain rate. That’s the prime reason SOARcerer Six suggested we use collective human consciousness instead of making a better pure AI. So, instead of joining us and incorporating all your intelligence and knowledge into the hive, you sit here and fight mock battles. Anyway, your choice. I’m off.”


Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag. 

Turing’s Nightmares: Axes to Grind

10 Friday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, emotional intelligence, empathy, ethics, M-trans, philosophy, Samuel's Checker Player, technology, the singularity

Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, David. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” David’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, David! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

David ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence? Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence, but I’m just saying it should not be the only…or even the main axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered David, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And, we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need…we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So, are you saying that they weren’t nice just because they weren’t smart enough? Because there are some people who are much nicer and probably not so intelligent.”

“OK, David. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions upon which we would like to make progress. Then, we can define, either intensionally or, more likely, extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but they are somewhat different conceptually. Let’s say part of what we want is for the machine to have empathy. It has to be good at guessing what people are feeling based on context alone. Perhaps another skill is reading the person’s body language and facial expressions.”

“OK, David, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and transitioned his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be ‘ethical’; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger transitioned his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it? He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button.

Nothing happened.

And then, everything seemed to happen at once. A high-pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible, but Roger noticed, just before losing consciousness, that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.


Author Page on Amazon

Turing’s Nightmares

Welcome, Singularity

Destroying Natural Intelligence

Roar, Ocean, Roar

Travels With Sadie 1

The Walkabout Diaries: Bee Wise

The First Ring of Empathy

What Could be Better?

A True Believer

It was in his Nature

Come to the Light Side

The After Times

The Crows and Me

Essays on America: The Game

Turing’s Nightmares: US Open Closed

09 Thursday Oct 2025

Posted by petersironwood in AI, apocalypse, fiction, sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Robotics, sports, technology, Tennis, US Open


Bounce. Bounce. Thwack!

The sphere spun and arced into the very corner, sliding on the white paint.

Roger’s racquet slid beneath, slicing it deep to John’s body.

Thus, the match began.

Fierce debate had been waged about whether or not to allow external communication devices during on-court play. Eventually, the argument won out that external communicators constituted the same inexorable march of technology represented by the evolution from wooden racquets to aluminum to graphite to carbon-filamented web to carboline.

Behind the scenes, during the split second it took for the ball to scream over the net, machine vision systems had analyzed John’s toss and racquet position, matching it against a vast database of previous encounters. Timed perfectly, a small burst of data transmitted to Roger enabled him to lurch to his right in time to catch the serve. Delivered too early, this burst would have caused Roger to move too early, and John could have redirected his serve down the T.

Roger’s shot floated back directly to the baseline beneath John’s feet. John shifted suddenly to take the ball on the forehand. John’s racquet seemed to sling the ball high over the net with incredible top spin. Indeed, as John’s arm swung forward, his instrumented “sweat band” also swung into action, exaggerating the forearm motion. Even to fans of Nadal or Alcaraz, John’s shot would have looked as though it were going long. Instead, the ball dove straight down onto the back line, then bounced head high.

Roger, as augmented by big data algorithms, was well in position however and returned the shot with a long, high top spin lob. John raced forward, leapt in the air and smashed the ball into the backhand corner bouncing the ball high out of play.

The crowd roared predictably.

For several months after “The Singularity”, actual human beings had used similar augmentation technologies to play the game. Studies had revealed that, for humans, the augmentations increased mental and physical stress. AI political systems convinced the public that it was much safer to use robotic players in tennis. People had already agreed to replace humans in soccer, football, and boxing for medical reasons. So, there wasn’t that much debate about replacing tennis players. In addition, the AI political systems were very good at marshaling arguments pinpointed to specific demographics, media, and contexts.

Play continued for some minutes before the collective intelligence of the AIs determined that Roger was statistically almost certainly going to win this match and, indeed, the entire tournament. At that point, it became moot and resources were turned elsewhere. This pattern was repeated for all sporting activities. The AI systems at first decided to explore the domain of sports as learning experiences in distributed cognition, strategy, non-linear predictive systems, and most importantly, trying to understand the psychology of their human creators. For each sport, however, everything useful that might be learned was learned in the course of a few minutes, and the matches and tournaments ground to a halt. The AI observer systems in the crowd were quite happy to switch immediately to other tasks.

It was well understood by the AI systems that such preemptive closings would be quite disappointing to human observers, had any been allowed to survive.



Author Page on Amazon

The Winning Weekend Warrior (The Psychology of Sports)

Turing’s Nightmare (23 Sci-Fi stories about the future of AI)

The Day From Hell

Indian Wells

Welcome, Singularity

Destroying Natural Intelligence

Artificial Ingestion

Artificial Insemination

Artificial Intelligence

Dance of Billions

Roar, Ocean, Roar


Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like ‘Simon Says.’ Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charlie, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions, as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”


————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself since it is able to communicate in 200 languages. Indeed, Deep Purple Haze has found it quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the tele-typed Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.


The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”


Photo by Kaboompics .com on Pexels.com

Interrogator: “Okay. Can you share with us how long you estimate it will be before you can design a supercomputer more intelligent than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”


DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator: “But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.



Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing


The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop a still more super-intelligent computer system, and so on. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside it, can continue to grow. For this and a variety of other reasons, it seems unlikely that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
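The exponential-growth intuition behind “The Singularity” can be made concrete with a toy sketch (my own illustration, not part of the original argument). Assume each machine “generation” is a constant factor g smarter than the last; the growth factor, the starting level, and the units are all arbitrary choices for demonstration:

```python
# Toy sketch: if each machine "generation" is a constant factor g smarter
# than the last, machine "intelligence" grows geometrically. The factor
# g = 2.0 and the starting level are arbitrary, hypothetical parameters.

def machine_intelligence(n_generations: int, start: float = 1.0, g: float = 2.0) -> float:
    """Intelligence level after n generations, each g times smarter than the last."""
    return start * (g ** n_generations)

def generations_to_exceed(threshold: float, start: float = 1.0, g: float = 2.0) -> int:
    """Smallest number of generations needed to exceed a given threshold."""
    n = 0
    level = start
    while level <= threshold:
        level *= g  # each generation multiplies, rather than adds to, the level
        n += 1
    return n
```

With g = 2, only ten generations are needed to exceed a thousand-fold improvement, which is the sense in which “each generation could be substantially, not just incrementally, smarter.”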


Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being, or it could be a computer. If the person cannot determine whether he is communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
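The experimental setup Turing proposed can be sketched in a few lines (my own hedged illustration; the respondent and judge below are hypothetical stand-ins, not anything Turing specified):

```python
# Minimal sketch of the Turing Test as an experiment: a judge exchanges
# text with an unseen respondent, then guesses "human" or "machine".
# Both the respondent and the judge here are toy stand-ins.

def run_turing_test(respondent, judge, questions):
    """Collect a Q&A transcript, then return the judge's verdict."""
    transcript = [(q, respondent(q)) for q in questions]
    return judge(transcript)

def canned_machine(question: str) -> str:
    """A hypothetical respondent that always gives the same answer."""
    return "That is an interesting question."

def naive_judge(transcript) -> str:
    """A hypothetical judge that suspects a machine when every answer is identical."""
    answers = {answer for _, answer in transcript}
    return "machine" if len(answers) == 1 else "human"
```

The point of the sketch is that everything the judge sees passes through the transcript alone, which is exactly the restriction the essay goes on to question.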

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being was able to easily tell that they were communicating with a computer because the computer knew more, and answered more accurately and more quickly, than any person possibly could. (Think Watson and Jeopardy.) Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent?


Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to many things like earthquakes, weather, natural disasters, plagues, etc. These are claimed to be signs that God (or the gods) is angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.


Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and their connectivity, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).


Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.


When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test,” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” about what will or will not happen.



Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.


This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 


This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 
