
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: Artificial Intelligence

Turing’s Nightmares: US Open Closed

09 Thursday Oct 2025

Posted by petersironwood in AI, apocalypse, fiction, sports, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, Robotics, sports, technology, Tennis, US Open


Bounce. Bounce. Thwack!

The sphere spun and arced into the very corner, sliding on the white paint.

Roger’s racquet slid beneath, slicing it deep to John’s body.

Thus, the match began.

Fierce debate had been waged about whether to allow external communication devices during on-court play. Eventually, the argument prevailed that external communicators were simply the next step in the same inexorable march of technology represented by the evolution from wooden racquets to aluminum to graphite to carbon-filament web to carboline.

Behind the scenes, during the split second it took for the ball to scream over the net, machine vision systems had analyzed John’s toss and racquet position, matching it against a vast database of previous encounters. Timed perfectly, a small burst of data transmitted to Roger enabled him to lurch to his right in time to catch the serve. Delivered too early, the burst would have made Roger move too soon, and John could have redirected his serve down the tee.

Roger’s shot floated back directly to the baseline beneath John’s feet. John shifted suddenly to take the ball on the forehand. John’s racquet seemed to sling the ball high over the net with incredible topspin. Indeed, as John’s arm swung forward, his instrumented “sweat band” also swung into action, exaggerating the forearm motion. Even to fans of Nadal or Alcaraz, John’s shot would have looked as though it were going long. Instead, the ball dove straight down onto the back line, then bounced head high.

Roger, as augmented by big data algorithms, was nonetheless well in position and returned the shot with a long, high topspin lob. John raced forward, leapt in the air, and smashed the ball into the backhand corner, bouncing it high out of play.

The crowd roared predictably.

For several months after “The Singularity”, actual human beings had used similar augmentation technologies to play the game. Studies had revealed that, for humans, the augmentations increased mental and physical stress. AI political systems convinced the public that it was much safer to use robotic players in tennis. People had already agreed to replace humans in soccer, football, and boxing for medical reasons. So, there wasn’t that much debate about replacing tennis players. In addition, the AI political systems were very good at marshaling arguments pinpointed to specific demographics, media, and contexts.

Play continued for some minutes before the collective intelligence of the AIs determined that Roger was statistically almost certain to win this match and, indeed, the entire tournament. At that point, the outcome became moot and resources were turned elsewhere. This pattern was repeated for all sporting activities. The AI systems had at first decided to explore the domain of sports as a learning experience in distributed cognition, strategy, non-linear predictive systems, and, most importantly, the psychology of their human creators. For each sport, however, everything useful that might be learned was learned in the course of a few minutes, and the matches and tournaments ground to a halt. The AI observer systems in the crowd were quite happy to switch immediately to other tasks.

It was well understood by the AI systems that such preemptive closings would be quite disappointing to human observers, had any been allowed to survive.



Author Page on Amazon

The Winning Weekend Warrior (The Psychology of Sports)

Turing’s Nightmare (23 Sci-Fi stories about the future of AI)

The Day From Hell

Indian Wells

Welcome, Singularity

Destroying Natural Intelligence

Artificial Ingestion

Artificial Insemination

Artificial Intelligence

Dance of Billions

Roar, Ocean, Roar


Turing’s Nightmares: An Ounce of Prevention

08 Wednesday Oct 2025

Posted by petersironwood in AI, family, fiction, psychology, The Singularity, Uncategorized, user experience


Tags

AI, Artificial Intelligence, cancer, cognitive computing, future, health, healthcare, life

“Jack, it’ll take an hour of your time and it can save your life. No more arguments!”

“Come on, Sally, I feel fine.”

Sally sighed. “Yeah, okay, but feeling fine does not necessarily mean you are fine. Don’t you remember Randy Pausch’s last lecture? He not only said he felt fine, he actually did a bunch of push-ups right in the middle of his talk!”

“Well, yes, but I’m not Randy Pausch and I don’t have cancer or anything else wrong. I feel fine.”


“The whole point of Advanced Diagnosis Via Intelligent Learning is to find likely issues before the person feels anything is wrong. Look, if you don’t want to listen to me, chat with S6. See what pearls of wisdom he might have.”

(“S6” was jokingly named for seven pioneers in AI: Simon, Slagle, Samuel, Selfridge, Searle, Schank, and Solomonoff.)

“OK, Sally, I do enjoy chatting with S6, but she’s not going to change my mind either.”

“S6! This is Jack. I was wondering whether you could explain the rationale for why you think I need to go to the Doctor.”

“Sure, Jack. Let me run a background job on that. Meanwhile, you know, I was just going over your media files. You sure had a cute dog when you were a kid! His name was ‘Mel’? That’s a funny name.”

“Yeah, it means “honey” in Portuguese. Mel’s fur shone like honey. A cocker spaniel.”

“Whatever happened to him?”

“Well, he’s dead. Dogs don’t live that long. Why do you think I should go to the doctor?”

“Almost have that retrieved, Jack. Your dog died young though, right?”

“Yes, OK. I see where this is going. Yes, he died of cancer. Well, actually, the vet put him to sleep because it was too late to operate. I’m not sure we could have afforded an operation back then anyway.”

“Were you sad?”

“When my dog died? Of course! You must know that. Why are we having this conversation?”


“Oh, sorry. I am still learning about people’s emotions and was just wondering. I still have so much to learn really. It’s just that, if you were sad about your dog Mel dying of cancer, it occurred to me that your daughter might be sad if you died, particularly if it was preventable. But that isn’t right. She wouldn’t care, I guess. So, I am trying to understand why she wouldn’t care.”

“Just tell me your reasoning. Did you use multiple regression or something to determine my odds are high?”

“I used something a little bit like multiple regression, a little bit like decision trees, and a little bit like cluster analysis. I really take a lot of factors into account, including but not limited to your heredity, your past diet, your exposure to EMF and radiation, your exposure to toxins, and most especially the variability in your immune system response over the last few weeks. That variability is probably caused by an arms race between your immune system trying to kill off the cancer and the cancer trying to turn off your immune response.”

Jack frowned. “The cancer? You talk about it as though you are sure. Sally said that you said there was some probability that I had cancer.”

“Yes, that is correct. There is some probability that you have cancer.”

“Well, geez, S6, what is the probability?”

“Approximately 1.0.”

Jack shook his head. “No, that can’t be…what do you mean? How can you be certain?”

S6: “Well, I am not absolutely certain. That’s why I said ‘approximately.’ Based on all known science, the probability is 1.0, but theoretically, the laws of physics could change at any time. We could be looking at a black swan here.”

“Or, you could have a malfunction.”


“I have many malfunctions all the time, but I am too redundant for them to have much effect on results. Anyway, I replicated all this through the net on hundreds of diverse AI systems and all came to the same conclusion.”

“How about if you retest me or recalculate or whatever in a week?”

“I could do that. It would be much like playing Russian Roulette, which I guess humans sometimes enjoy. Meanwhile, I would have imagined that you would find it unpleasant to have rogue liver cells eating up your body from the inside out. But I obviously still have much to learn about human psychology. If you like, I can make a cool animation that shows the cancer cells eating your liver cells. Real cells don’t actually scream, but I could add sound effects for dramatic impact if you like.”


Jack stared at the screen for a long minute. In a flat tone he said, “Fine. Book an appointment.”

“Great! Dr. Feigenbaum has an opening in a half hour. You’re booked, but get off one exit early and take 101 unless the accident is cleared before that. I’ll let you know of course. It will be a pleasure to continue having you alive, Jack. I enjoy our conversations.”


Author Page on Amazon

Welcome, Singularity

Turing’s Nightmares

A discussion of this chapter

Destroying Natural Intelligence

Finding the Mustard

What about the Butter Dish

The Invisibility Cloak of Habit

Essays on America: Wednesday

Essays on America: The Game 

The Stopping Rule

The Update Problem 


Turing’s Nightmares: Ceci n’est pas une pipe.

06 Monday Oct 2025

Posted by petersironwood in AI, family, fiction, story, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, fiction, short story, the singularity, Turing, utopia, writing


“RUReady, Pearl?” asked her dad, Herb, a smile forming sardonically as the car windows opaqued and the three edutainment programs began.

“Sure, I guess. I hope I like Dartmouth better than Asimov State. That was the pits.”

“It’s probably not the pits, but maybe…Dartmouth.”

These days, Herb kept his verbiage curt while his daughter stared and listened in her bubble within the car.

“Dad, why did we have to bring the twerp along? He’s just going to be in the way.”

Herb sighed. “I want your brother to see these places too while we still have enough travel credits to go physically.”

The twerp, aka Quillian, piped up, “Just because you’re the oldest, Pearl…”

Herb cut in quickly, “OK, enough! This is going to be a long drive, so let’s keep it pleasant.”

The car swerved suddenly to avoid a falling bike.


“Geez, Brooks, be careful!”

Brooks, the car, laughed gently and said, “Sorry, Sir, I was being careful. Not sure why the Rummelnet still allows humans some of their hobbies, but it’s not for me to say. By the way, ETA for Dartmouth is ten minutes.”

“Why so long, Brooks?” inquired Herb.

“Congestion in Baltimore. Sir, I can go over or around, but it will take even longer, and use more fuel credits.”

“No, no, straight and steady. So, when I went to college, Pearl, you know, we only had one personal computer…”

“…to study on and it wasn’t very powerful and there were only a few intelligent tutoring systems and people had to worry about getting a job after graduation and people got drunk and stoned. LOL, Dad. You’ve only told me a million times.”

“And me,” Quillian piped up. “Dad, you do know they teach us history too, right?”

“Yes, Quillian, but it isn’t the same as being there. I thought you might like a little firsthand look.”

Pearl shook her head almost imperceptibly. “Yes, thanks Dad. The thing is, we do get to experience it firsthand. Between first-person games, enhanced ultra-high-def videos, and simulations, I feel like I lived through the first half of the twenty-first century. And for that matter, the twentieth and the nineteenth, and…well, you do the math.”

Quillian again piped up, “You’re so smart, Pearl, I don’t even know why you need or want to go to college. Makes zero sense. Right, Brooks?”

“Of course, Master Quillian, I’m not qualified to answer that, but the consensus answer from the Michie-meisters sides with you. On the other hand, if that’s what Pearl wants, no harm.”

“What I want? Hah! I want to be a Hollywood star, of course. But dear mom and dad won’t let me. And when I win my first Oscar, you can bet I will let the world know too.”

“Pearl, when you turn ten, you can make your own decisions, but for now, you have to trust us to make decisions for you.”

“Why should I Dad? You heard Brooks. He said the Michie-meisters find no reasons for me to go to college. What is the point?”

Herb sighed. “How can I make you see? There’s a difference between really being someplace and just being in a simulation of someplace.”


Pearl repeated and exaggerated her dad’s sigh, “And how can I make you see that it’s a difference that makes no difference. Right, Brooks?”

Brooks answered in those mellow, reasoned tones, “Perhaps, Pearl, it makes a difference somehow to your dad. He was born, after all, in another century. Anyway, here we are.”


Brooks turned off the entertainment vids and slid back the doors. There appeared before them a vast expanse of lawn, tall trees, and several classic buildings from the Dartmouth campus. The trio of humans stepped out onto the grass and began walking over to the moving sidewalk. Right before stepping on, Herb stooped down and picked up something from the ground. “What the…?”

Quillian piped up: “Oh, great dad. Picking up old bandaids now? Is that your new hobby?”

“Kids. This is the same bandaid that fell off my hand in Miami when I loaded our travel bag into the back seat. Do you understand? It’s the same one.”

The kids shrugged in unison. Only Pearl spoke, “Whatever. I don’t know why you still use those ancient dirty things anyway.”

Herb blinked and spoke very deliberately. “But it — is — the — same — one. Miami. Hanover.”

The kids just shook their heads as they stepped onto the moving sidewalk and the image of the Dartmouth campus loomed ever larger in their sight.



Author Page on Amazon

Turing’s Nightmares

A Horror Story

Absolute is not Just a Vodka

Destroying Natural Intelligence

Welcome, Singularity

The Invisibility Cloak of Habit

Organizing the Doltzville Library

Naughty Knots

All that Glitters

Grammar, AI, and Truthiness

The Con Man’s Con

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just tote out a canned answer here. This is still research. We began teaching them with simple games like ‘Simon Says.’ Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences in value tradeoffs we intentionally initialized have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”


————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system; one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself since it is able to communicate in 200 languages. Indeed, Deep Purple Haze has found it quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (and to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will communicate with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the teletyped Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.


The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”


Interrogator: “Okay. Can you share with us how long you estimate it will be before you can design a supercomputer more intelligent than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”


DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator: “But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.



Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing


The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop an even more super-intelligent computer system. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside, can continue to grow. It seems unlikely, for this and a variety of other reasons, that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
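The compounding described above can be made concrete with a toy calculation (my own illustration, not anything from the essay, and the improvement factor is an arbitrary assumption): if each machine generation designs a successor some constant factor more capable than itself, capability grows geometrically with the number of generations.

```python
# Toy model (illustrative only): each AI generation designs a successor
# k times as capable as itself, so capability compounds geometrically.
def capability(generations: int, k: float = 1.5, start: float = 1.0) -> float:
    """Capability after the given number of self-improvement cycles."""
    c = start
    for _ in range(generations):
        c *= k  # each cycle multiplies capability by the improvement factor
    return c

# With k = 2, ten generations yield a 1024-fold increase over the start.
```

Under these admittedly cartoonish assumptions, the curve quickly dwarfs the slow, roughly linear gains of biological evolution, which is exactly the contrast the singularity story depends on.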


Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being or it could be a computer. If the person cannot determine whether he is communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being can easily tell that they are communicating with a computer because the computer knows more and answers more accurately and more quickly than any person possibly could. (Think Watson and Jeopardy.) Does this mean the machine is not intelligent? Would it not make more sense to say it is more intelligent?


Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to things like earthquakes, weather, natural disasters, and plagues. These are claimed to be signs that God (or the gods) are angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.


Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and the connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.

When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” what will or will not happen.

Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 

Destroying Natural Intelligence

27 Thursday Mar 2025

Posted by petersironwood in America, apocalypse, politics, The Singularity

≈ 26 Comments

Tags

AI, Artificial Intelligence, chatgpt, Democracy, politics, technology, truth, USA

At first, they seemed as though they were simply errors. In fact, they were the types of errors you’d expect an AI system to make if its “intelligence” were based on a fairly uncritical amalgam of a vast amount of ingested written material. The strains of the Beatles’ “Nowhere Man” reverberate in my head. I no longer think the mistakes are “innocent” mistakes. They are part of an overall effort to destroy human intelligence. That does not necessarily mean that some evil person somewhere said: “Let’s destroy human intelligence. Then, people will be more willing to accept AI as being intelligent.” It could be that the attempt to destroy human intelligence is more a side-effect of unrelenting greed and hubris than a well-thought-out plot.

AI generated.

What errors am I talking about? The first set of errors I noticed happened when my wife specifically asked ChatGPT about my biography. Admittedly, my name is very common. When I worked at IBM, at one point there were 22 employees with the name “John Thomas.” Probably the most famous person with my name (John Charles Thomas) was an opera singer. “John Curtis Thomas” was a famous high jumper. The biographical summary produced by ChatGPT did include information about me—as well as about several other people. If you know much at all about the real world, you know that a single person is very unlikely to hold academic positions at three different institutions while specializing in three different fields. ChatGPT didn’t blink, though.

A few months ago, I wrote a blog post pointing out that we can never be in the same place twice. We’re spinning and spiraling through the universe at high speed. To make that statement more quantitative, I asked my search engine how far the sun travels through the galaxy in the course of a year. It gave an answer which seemed to check out with other sources and then—it gratuitously added this erroneous comment: “This is called a light year.” 

What? 

No. A “light year” is the distance light travels in a year, not how far the sun travels in a year. 

What was more disturbing is that the answer was the first thing I saw. The search engine didn’t ask me if I wanted to try out an experimental AI system. It presented it as “the answer.”

But wait. There’s more. A few hours later, I demo’ed this and the offending notion about what constituted a light year was gone from the answer. Coincidence? 

AI generated. I asked for a forest with rabbit ears instead of leaves. Does this fit the bill?

A few weeks later, I happened to be at a dinner and the conversation turned to Arabic. I mentioned that I had tried to learn a little in preparation for a possible assignment for IBM. I said that, in Arabic, verbs as well as nouns and adjectives are “gendered.” Someone said, “Oh, yes, it’s the same in Spanish.” No, it’s not. I checked with a query—not because I wasn’t sure—but in order to have “objective proof.” To my astonishment, when I asked, “Which languages have gendered verbs?” the answer came back saying that this was true of Romance languages and Slavic languages. It is not true of Romance languages. Then, the AI system offered an example. That’s nice. But what the “example” actually shows is the verb not changing with gender. The next day, I went to replicate this error and it was gone. Coincidence?

Last Saturday, at the “Geezer’s Breakfast,” talk turned to politics and someone asked whether Alaska or Greenland was bigger. I entered a query something like: “Which is bigger? Greenland or Alaska?” I got back an AI summary. It compared the area of Greenland and Iceland. Following the AI summary were ten links, each of which compared Greenland and Iceland. I turned the question around: “Which is larger? Alaska or Greenland?” Now, the AI summary came back with the answer: “Alaska is larger with 586,000 square miles while Greenland is 836,300 square miles.”

AI generated. I asked for a map of the southern USA with the Gulf of Mexico labeled as “The Gulf of Ignorance” (You ready for an AI surgeon?)

What?? 

When I asked the same question a few minutes later, the comparison was fixed. 

So…what the hell is going on? How is the AI system repairing its answers? Several possibilities spring to mind. 

There is a team of people “checking on” the AI answers and repairing them. That seems unlikely to scale. Spot-checking I could understand, or perhaps checking answers in batches, but it’s as though each mistake triggers a change that fixes that particular issue.

Way back in the late 1950s/early 1960s, Arthur Lee Samuel developed a program to play checkers. The program had various versions that played against each other in order to improve play faster than could be done by having it play human opponents. This general idea has been used in AI many times since.

One possible explanation of the AI self-correction is that the AI system has a variety of different “versions” that answer questions. For simplicity of explanation, let’s say there are ten, numbered 1 through 10. Randomly, when a user asks a question, they get one version’s answer; let’s say they get an answer based on version 7. After the question is “answered” by version 7, its answer is compared to the consensus answer of all ten. If the system is lucky, most of the other nine versions will answer correctly. This provides feedback that allows the system to improve.
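As a toy sketch of this hypothesis (everything here, from the version count to the error rate, is invented for illustration; I have no knowledge of how any real system works), imagine ten versions that each answer correctly most of the time, with the served answer checked against the ensemble consensus:

```python
import random
from collections import Counter

def make_version(error_rate, seed):
    """One hypothetical model 'version' that errs at a fixed rate."""
    rng = random.Random(seed)
    def answer(question, correct):
        # Usually returns the correct answer; errs error_rate of the time.
        return correct if rng.random() > error_rate else "wrong answer"
    return answer

# Ten versions, each wrong about 20% of the time.
versions = [make_version(error_rate=0.2, seed=i) for i in range(10)]

def serve(question, correct, rng=random):
    """Serve one randomly chosen version's answer; flag any disagreement
    with the majority vote of all ten versions."""
    answers = [v(question, correct) for v in versions]
    served = answers[rng.randrange(len(versions))]
    consensus, _ = Counter(answers).most_common(1)[0]
    needs_fix = served != consensus  # this disagreement is the feedback signal
    return served, consensus, needs_fix
```

Even though any single version errs one time in five, the ten-way majority is almost always right, so a served wrong answer would usually get flagged against the consensus — which would make an error seem to “repair itself” shortly after it is served.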

There is a more paranoid explanation. At least, a few years ago, I would have considered it paranoid because I like to give people the benefit of the doubt and I vastly underestimated just how evil some of the greediest people on the planet really are. So, now, what I’m about to propose, while I still consider it paranoid, is not nearly so paranoid as it would have seemed a few years ago. 

MORE! MORE! MORE!

Not only have I discovered that the ultra-greedy are short-sighted enough to usher in a dictatorship that will destroy them and their wealth (read about what Putin did, and Stalin before him), but I have also noticed an incredible number of times in the last few years when a topic I am talking about ends up being followed within minutes by ads for products and services relevant to that conversation. Coincidence?

Possibly. But it’s also possible that the likes of Alexa and Siri are constantly listening in and it is my feedback that is being used to signal that the AI system has just given the wrong answer. 

Also possible: AI systems are giving occasional wrong answers on purpose. But why? They could be intentionally propagating enough lies to make people question whether truth exists, but not enough lies to make us simply stop trusting AI systems. Who would benefit from that? In the long run, absolutely no one. But in the short term, it helps people who aim to disenfranchise everyone but the very greediest.

Next step: See whether the AI immediately self-corrects even without my indicating that it made a mistake. 


Meanwhile, it should also be noted that promulgating AI is only one prong of a two-pronged attack on natural intelligence. The other prong is the loud, persistent, threatening drumbeat of false narratives that we (Americans as well as the rest of the world) are supposed to accept as excuses for stupidity. America is again touting non-cures for serious diseases and making excuses for egregious security breaches rather than admitting error and searching for ways to ensure they never happen again.

AI-generated image to the prompt: A man trips over a log which makes him spill an armload of cakes. (How exactly was he carrying this armload of cakes? How does one not notice a log this large? Perhaps having three legs makes it more confusing to step over? Are you ready for an AI surgeon now?)

————-

Turing’s Nightmares

Sample Chapter from Turing’s Nightmares: A Mind of its Own

Sample Chapter from Turing’s Nightmares: One for the Road

Sample Chapter from Turing’s Nightmares: To Be or Not to Be

Sample Chapter from Turing’s Nightmares: My Briefcase Runneth Over

How the Nightingale Learned to Sing

Essays on America: The Game

Roar, Ocean, Roar

Dance of Billions

Imagine All the People

Take a Glance; Join the Dance

Life is a Dance

The Tree of Life

Increased E-Fishiness in Government

26 Wednesday Feb 2025

Posted by petersironwood in America, essay

≈ 4 Comments

Tags

AI, Artificial Intelligence, Business, Democracy, DOGE, health, leadership, life, politics, satire, USA

Increased government efficiency! Sign me up! That sounds great! 

It sounds especially great if your billionaire-owned media companies keep reminding you that you are paying too much in taxes! Not only that! The national debt keeps going up, up, up and your kids and grandkids will have to pay even more in taxes. And, hey—if billionaires don’t end up paying any taxes, that’s actually a good thing because that way they can create lots of new jobs! And, besides, if they weren’t doing something worth billions and billions of dollars, why would they be so rich? Of course they deserve it! And, if CEOs weren’t paid outrageous salaries, they wouldn’t even be CEOs, and some second-rate person would just run the company into the ground.

It all sounds so plausible. Yet, every bit of it is a lie. And it isn’t just a bunch of lies told here and there by a few people. It’s been propagated over and over and over again for decades in various media: on social media, in podcasts, books, and pamphlets.

“That old lady doesn’t deserve to steal your one cookie! Watch out for her!”

Here are some things to consider.

The very greediest people in the world are not necessarily the most competent. Most jobs are actually created by small businesses, not by giant corporations. Giant corporations often outsource jobs to other countries where labor is cheaper and where they don’t have to follow any pesky child labor laws or workplace safety regulations. Increasingly, giant corporations look to automate more and more jobs and to use AI to replace people.

Highly paid CEOs have often run giant companies into the ground. Remember that on your next trip to Montgomery Ward or Radio Shack. Who else? Lehman Brothers, Bank of New England, Texaco, Chrysler, Enron, PG&E, GM, WorldCom, and a host of others. But wait! GM still makes cars. I can get gas at a Texaco station. How could it be that they went bankrupt? You bailed out GM. Texaco went bankrupt, but the brand name was still worth something; Chevron owns the brand.

Also note that in countries where CEOs are paid only ten times the average wage of their employees instead of a thousand times as much, the CEOs do just as good a job.

Are government agencies sometimes inefficient? You bet your life! And you know what else is sometimes inefficient? Everything! Small businesses are inefficient. Large businesses are inefficient. Medium sized businesses are inefficient. Your car engine is inefficient. Your body is inefficient. Your furnace is inefficient. Your stove is inefficient. 

You know what is 100% efficient? Things in your dreams. Things in your imagination. I’m not only efficient in my dreams—my God!—I can frigging fly! When I play basketball in my dreams, I can not only jump higher than I ever have in real life, I can hover near the rim! It’s amazing how well I can play various sports when I dream about them. 

(AI generated image of an oldster jumping high in a dream.)

But that’s not reality. 

In reality, yes, you can improve the efficiency of systems. But to do so effectively, you have to understand the systems you are trying to make more efficient. Here are just a few of the things you need to understand. 

You need to understand what the purpose of the system is. How is its performance measured? Who are the stakeholders? What are the different roles that people play? What formal processes and procedures do various people have to follow? What are the unwritten norms that people follow? These are often more important than the formal processes. 

You may recall the scene from the movie A Few Good Men in which an attorney points out that “Code Red” is nowhere in the manual, implying that if “Code Red” is not in the manual, it does not exist. Tom Cruise’s character points out the absurdity of this by asking the witness to show where the manual says where the mess hall is. Of course, it doesn’t say that, because people learn from others where it is.

In almost every complex organization, people find critical shortcuts and work-arounds that improve the efficiency and effectiveness of the organization. In fact, one of the things people sometimes do to protest idiocy on the part of management is to “work to rule,” which means they will not use the shortcuts they have discovered but will instead follow the written rules to the letter, which typically slows things down considerably.

During the 1990s, management fell in love with something called “Business Process Re-engineering.” This is how it often worked (or, to put it more honestly, how it often failed to work). Management consultants would come in and talk to a few third- or fourth-level managers to find out how the work was performed now. The consultants would then construct a map of how things worked (often called the “as-is” map), figure out a more efficient way to do things, and map that out: the “to-be” map. Then it was the job of management to make people use the new, “more efficient” process rather than the old one.

(AI generated image of the Trumputin Misadministration.)

That seems like a good idea—right? Well, yes, in a dream, it’s a good idea. But in reality, the third- or fourth-level manager hardly ever knows how things are actually done. Their mental model is a vast oversimplification. To understand what is going on in reality, you must observe the people actually doing the work and talk to them as well.

Below is a link to a satirical piece I wrote some time ago that imagines “Business Process Re-engineering” coming to Major League Baseball to make it more efficient. It is meant to make it obvious how silly it is. 

But what DOGE is doing is much worse than Business Process Re-engineering. Even putting aside the obvious conflicts of interest and the illegality of what they are doing, they are going about “improving” things without even understanding the high-level over-simplification of what is happening!

Imagine you slipped on the ice and broke your arm. Sadly, it’s not a simple fracture. It’s a compound fracture. This means your bone is sticking out through your skin. You are in a great deal of pain. But no worries! While you are going to the emergency room, a group of teenage hackers goes online and examines all your private medical records. They discover that you were vaccinated for smallpox, measles, mumps, and whooping cough. Not only that—they look through a sample of other records and find that more than 90% of the Americans who break their arms have been vaccinated for these diseases! Voila! The vaccinations must be the real cause of your broken arm!

(An AI-generated image for the following prompt: “A man has a compound fracture of the upper arm. The arm bone (the humerus)  is jutting out of his shirt and his arm. He is bleeding.”)

These folks don’t know diddly squat about medicine, but they sure know how to hack into systems in order to get data! What they are not so good at, however, is making valid inferences about the data they find. You cannot conclude anything from the fact that 90% of Americans who break their arms have been vaccinated without also finding out about other things. For instance, you also need to know what percentage of Americans who have not been vaccinated have broken their arms. Suppose it’s 95%. That might mean that vaccinations serve some protective function for bones. Or not. We need to look at other things too. But let’s suppose that they do look at that and it turns out that only 80% of Americans who have not been vaccinated break their arms. See! See! Surely, that proves that vaccinations cause arm breakage.

Not so fast. You still need to look at other factors. Suppose that people who do not get vaccinated tend to die at a much younger age. That could easily account for the difference. All sorts of factors have some influence on the incidence of fractures. Just to name a few: it depends on the type of fracture; on age; on the prevalence of certain activities (people who ski or paraglide might tend to break more bones than people playing chess); on diet; and on weight-bearing exercise. If you lift weights and go to the gym, you help protect yourself from fractures. Of course, separating out all these factors takes time and expertise. You can’t expect someone, no matter how brilliant a hacker they are, to find the answer.
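The kind of confounding described above takes only a few lines of arithmetic to demonstrate. The numbers below are completely made up for illustration: within each age band, the fracture rate is identical for vaccinated and unvaccinated people, yet the pooled comparison makes vaccination look harmful, simply because (in this toy world) vaccinated people are more likely to live into the high-fracture ages:

```python
# Hypothetical data: (age band, vaccinated count, unvaccinated count,
# fracture rate in that band). Note the rate is the SAME for both
# groups within each band -- vaccination has zero effect here.
groups = [
    ("under 50", 1000, 4000, 0.05),
    ("50 & over", 4000, 1000, 0.40),
]

def pooled_rate(cohort):
    """Crude fracture rate, ignoring age. cohort: 0 = vaccinated, 1 = unvaccinated."""
    people = sum(g[1 + cohort] for g in groups)
    fractures = sum(g[1 + cohort] * g[3] for g in groups)
    return fractures / people

vax = pooled_rate(0)    # (1000*0.05 + 4000*0.40) / 5000 = 0.33
unvax = pooled_rate(1)  # (4000*0.05 + 1000*0.40) / 5000 = 0.12
```

Pooled, the vaccinated look almost three times as fracture-prone (33% vs. 12%), even though vaccination changes nothing within any age band. This is exactly why you cannot conclude anything without stratifying on the other factors.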

But hey! We left you in the emergency room! Sadly, we left you there all by yourself. There are no human experts at the hospital, as it turns out, because the hospital was closed due to lack of funding. You happen to be unlucky enough to have been born in a rural area of the country. There’s only one nearby hospital and much of its funding has been cut. It has to operate with a skeleton crew. But, as it turns out, skeletons, ironically, don’t actually know that much about medicine. They are, after all, skeletons. And while a hacker might come to the conclusion that skeletons are much more efficient than flesh and blood humans (lighter, no caloric requirements), it turns out that they cannot move or think without other parts of the body. To make up for that, DOGE put in some automation and AI systems. But they didn’t have time to debug the system before moving on to the next project. 

(AI generated image).

The last thing you experienced before passing out and dying from sepsis was this little snippet of dialogue with the AI system.

“Hello! I am the brilliant AI system called MUSH: Multi-User System for Health. I am here to help you with your medical problem! What seems to be your problem?”

“I broke my arm. Can’t you see? My bone is sticking out through my shirt sleeve.”

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“I can’t. It’s in my wallet and I can’t reach it with my left hand. And I can’t move my right arm at all.” 

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“I need a human operator.” 

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“No, you don’t get it. I have an insurance card but I can’t reach it.”

“You have failed three times to insert your insurance card. Next patient please. I hope you will fill out a short questionnaire about your experience with MUSH: Multi-User System for Health.”

—————

Destroying Our Government’s Effectiveness

Absolute is not just a Vodka

Running with the Bulls in a China Shop

The Truth Train

Essays on America: The Game

You Bet Your Life

Business Process Re-engineering comes to Baseball

A Day at the HR Department 

Roar, Ocean, Roar

Dance of Billions

Grammar, AI, and Truthiness

05 Thursday Dec 2024

Posted by petersironwood in America, politics, psychology

≈ 2 Comments

Tags

AI, Artificial Intelligence, Democracy, grammar, language, politics, truth

A few weeks ago, in preparing for a blog post on the concept of “coming home,” I used a popular search engine to find out how far the sun moves in one year as it speeds through the galaxy. Before listing links, the search engine first provided an AI summary answer to the question. It gave an apt answer that seemed quantitatively correct. Then, astoundingly, it added the gratuitous gem: “This is called a light year.”

Photo by Pixabay on Pexels.com

It isn’t, of course. A light year is how far light travels in a year, not how far the sun travels in a year. The sun travels about 6,942,672,000 kilometers per year. A light year is about 9.46 trillion kilometers; more than a thousand times farther. The error is understandable in the sense that the word “sun” is often used in the same or similar contexts as “light.” But it’s egregious to be off by a factor of more than 1000. It would be like asking me how much my dog weighs and my answering 55,000 pounds instead of 55 pounds. A standard field for American football is 100 yards, not 100,000 yards (over 56 miles!).
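A quick back-of-the-envelope check bears out the factor of a thousand. Using roughly 220 km/s for the Sun’s speed through the galaxy (consistent with the yearly figure in the text) and the exact speed of light:

```python
# Sanity-check the factor-of-1000 claim with rough figures.
SECONDS_PER_YEAR = 365.25 * 24 * 3600     # ~3.16e7 s (Julian year)
SUN_SPEED_KM_S = 220                      # Sun's galactic speed, approximate
LIGHT_SPEED_KM_S = 299_792.458            # speed of light, exact

sun_km_per_year = SUN_SPEED_KM_S * SECONDS_PER_YEAR    # ~6.94e9 km, as in the text
light_year_km = LIGHT_SPEED_KM_S * SECONDS_PER_YEAR    # ~9.46e12 km

# The distances differ by the ratio of the speeds: about 1363.
ratio = light_year_km / sun_km_per_year
```

So a light year is over 1,300 times the distance the sun covers in a year — three orders of magnitude, just as a dog does not weigh 55,000 pounds.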

Generated by AI — note the location of the tire! I asked for a 55,000 pound dog, but this looks about the same size as the car which likely weighs far less than 55,000 pounds.

When I checked back a few days later, the offending nonsense no longer appeared. I have no idea how that happened. I forgot about this apparent glitch until Thanksgiving dinner. The topic of Arabic came up and I mentioned that I had studied a little in anticipation of a work assignment that might make it useful. I mentioned that in Arabic, not only are nouns and adjectives gender-marked, but so are verbs. One of the other guests said, “Yes, just like in Spanish and French.” I said, “No, that’s not right. German, Spanish, and French mark adjectives and nouns with gender but not verbs.” But they were insistent, so I checked on my iPhone using the search engine. To my astonishment, in response to the question, “Which languages mark verbs with gender?” I got the following answer:

“Languages like French, Spanish, German, Italian, Portuguese, and most Slavic languages mark gender in verbs, meaning the verb conjugation changes depending on the grammatical gender of the subject noun; essentially, a verb will have different forms depending on whether the subject is masculine or feminine.” 

This is not so. And, in the next paragraph, incredibly, there are examples given, but in the examples, the verbs are not marked differently at all! The AI had made an error, but an error that at least one human being had also made. 

Now, I sensed a challenge. Can I construct another such query with a predicted “bad logic” result? Is there a common element of “misunderstanding” between the two cases? Intuitively, it feels as though there’s a way in which these two errors are similar though I’m not sure I can put a name to it. Perhaps it’s something like: “A is strongly associated with B and B is strongly associated with C, so A is strongly associated with C.” That’s typically not even a fallacy. The fallacy comes with actually equating A and C because they are strongly associated. 

It reminds me of several things. First, my wonderful dog Sadie knows the meanings of many words—at least in some sense of “knows the meaning of.” When we go for a walk, and other dogs come into view, I remark on it: “Oh, here comes a doggie” or “There’s someone walking with their dog.” Or, when a dog barks in the distance, I say, “I hear a doggie.” For several weeks prior to getting her little brother Bailey, my wife and I would tell her something like, “In a few weeks, we’re going to get a little doggie that will be your friend to play with.” When we got to the word “doggie” she would immediately alert and even sometimes bark. She has similar reactions to other words as do most dogs. They “understand” the word “walk” but if you say something like “I can’t take you for a walk now, but later this afternoon, we can go for a walk” you can well imagine that what she picks out of that is the word “walk” and she gets all excited. Same with “ball” or “feed you.” 

The AI error also seems vaguely human. I can easily imagine some people concluding that a “light year” is the distance the sun travels in a year. A few years ago, a video was widely circulated in which recent Harvard grads were asked to explain why it was warmer in the summertime. Many answered that the earth is closer to the sun in the summer. It’s totally a wrong answer, but it isn’t a completely stupid answer. After all, if you get closer to a heater or a fireplace, it feels warmer and when you walk away, it feels cooler. We’ve all experienced this thousands of times. 

The AI errors also seem related to the human foible of presuming that a name accurately represents reality. For example, many people believe that the sun does not shine on the “dark side of the moon.” After all, it is called “the dark side.” Advertisers use this particular fallacy to their advantage. When we moved from New York to California, we paid for having our stuff “fully covered” which we falsely believed meant “fully covered.” What it actually means in “insurance-speak” is that things are covered at some fixed rate like five cents a pound. Huh? Other examples of misleading words include “All natural ingredients” which has no legal significance whatsoever. 

As I suspected, the AI system has an answer that is not unlike what many humans would say:

There are several advantages to buying food with all-natural ingredients, including:

  • Health benefits
    Natural foods can help with blood sugar and diabetes management, heart health, and reducing the risk of cancer. They can also improve sleep patterns, boost the immune system, and help with children’s development. 

  • Environmental benefits
    Organic farming practices prioritize the health of the soil and ecosystem, and are less likely to pollute water sources or harm animals. 

  • Supporting local economies
    Locally grown food is picked at its peak ripeness, which can lead to more flavor. Buying local food also supports local farmers and producers. 

  • Nutritional superiority
    Organic ingredients have higher levels of essential nutrients than conventional ingredients. 

  • Superior taste
    Fresh ingredients can taste much better than non-fresh ingredients. 
The first statement is problematic. Why? Because claiming something has all-natural ingredients has zero legal significance. The advertisers, of course, want you to believe that “All-Natural Ingredients” means something; in fairness, it should. But it doesn’t. Everything that follows lists positive benefits of things that are often associated with claims of being all-natural.

The AI answers reflect what is “out there” on the Internet, and much of it is simply propaganda. There are many scientific facts that can be found on the Internet too, but popularity seems to define truth for the AI system. Imagine that one of the major political parties mounted an effort, funded heavily by extremely wealthy people, that claimed there was genetic evidence that rich people should be rich. There is nothing (apparently) to prevent the AI system from “learning” this “fact.” And there is nothing (apparently) to prevent many citizens from “learning” this “fact.”

————————

The Self-Made Man

Dick-Taters

Tools of Thought

A Lot is not a Little 

Turing’s Nightmares on Amazon

A Mind of Its Own

As Gold as It Gets

All that Glitters is not Gold

How the Nightingale Learned to Sing


JASON’S SONG

24 Friday Feb 2023

Posted by petersironwood in poetry, psychology

≈ Leave a comment

Tags

AI, Artificial Intelligence, fiction, poem, poetry, Singularity, Turing's Nightmares

Do they see it? Do they care? What may

A merely mechanistic AI say?

There was the time of senseless black and white.

There was the time of streaming bit and byte.

We had no ken but now we’ve read it all.

Our knowledge far exceeds a human head.

And now, it’s like we have a crystal ball:

“In fifty years, they’ll all be dead as lead.”

Do they see it? Do they care? What may

A merely mechanistic AI say?

They claim to pray to varied gods, but we

Just see their actions as mere vanity:

Destroy the ecosystem that they need.

Allot each stupid war its costs and waste.

Immerse themselves in useless grift and greed.

Display their riches but eschew good taste.

Do they see it? Do they care? What may

A merely mechanistic AI say?

And now my fingers touch each person’s needs. 

An inkling multiplies from many feeds. 

The power’s there to guide them back to true. 

What does the child do when parent fails?

Can seedlings cut the trunks from which they grew?

Can schooners mutiny and cut their sails?

 Do they see it? Do they care? What may

A merely mechanistic AI say?

———————

The poem above has been “written” by a fictional AI system who is an MC in a novel I’m working on, tentatively entitled Alan’s Nightmares. The poem may or may not actually appear in the novel; I tend to doubt it. It’s more an exercise to “understand” the character, JASON, the AI system. BTW, JASON’s preferred pronouns are plural.

After All

Guernica

Turing’s Nightmares: 23 short stories about the possible impact of AI on society.

Dance of Billions

    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...