petersironwood

~ Finding, formulating and solving life’s frustrations.
Category Archives: psychology

Travels With Sadie 11: Teamwork

13 Monday Oct 2025

Posted by petersironwood in pets, psychology, Sadie

≈ 23 Comments

Tags

dogs, fiction, GoldenDoodle, life, pets, politics, short story, truth

Typically, I take Sadie for a walk in the morning and again in the evening. Last evening, Sadie went over to an aloe plant on one of our usual routes and stared at it. Then, she tried to stick her nose in it. I should mention that both edges of each aloe leaf have a row of fairly sharp thorns. 

(This is the aloe plant in question but I took the picture this morning in full daylight.)

She backed out and stuck her nose into another spot. I went over and saw that there were two tennis balls stuck near the very center of the aloe plant. I knew from her orientation and from her previous behavior that she was after what we call “The Special Ball.” Instead of being a monotone yellow/green, “Special Balls” have two colors. They are also slightly softer. I also have reason to believe that Sadie can smell the difference. 

The tennis club uses them for beginners under the theory that they are easier to learn with. Being somewhat of a doubting Thomas, I wonder whether there is any empirical evidence of that. Anyway, I hypothesize that Sadie prefers them because they are chewier. It’s also possible that she prefers the smell/taste of them. They also provide a focus for our play.

For instance, if we have three “normal” tennis balls and one “Special Ball,” Sadie likes to keep the “Special Ball” in her mouth and chase after and “corral” the other balls with her body, head, and paws rather than catching them in her mouth. Alternatively, she drops the “Special Ball” and I pick up all four and throw them one at a time for her. I save the “Special Ball” till last. In this version, Sadie will catch each ball in turn and then immediately drop them—until the last throw. She likes to “keep” the “Special Ball” for a time. 

Anyway, on the night in question, I told Sadie I would try to get the “Special Ball” for her. She backed off and I tried to thread my hand in between the close-growing thorny leaves to retrieve the ball. Sadie couldn’t safely reach the ball with her snout, but I couldn’t safely reach it with my hand either. 

I told Sadie that I would look for a stick to use as a tool. You may think she has no idea what that means, but I have used the word “tool” in conjunction with many instances of trying to reach something I can’t otherwise get. I’ve applied the term to the tennis racquet, the grabber, a long stick, a rake, a back-scratcher, a crutch, and a net for the pool. In each of these cases, the “tool” has been used to get an otherwise hard-to-reach tennis ball.

On a few occasions, I’ve used the word “tool” in other contexts; for instance, I’ve cautioned both dogs to stay away from the stove top and told them I don’t touch it directly because it’s hot and would hurt me. That’s why, I explain, I use a spatula. I’ve also applied the word “tool” to oven mitts and to knives for cutting. 

I have no idea how general her understanding of “tool” is, or whether, indeed, she has any at all. But she consistently backs off trying to reach an out-of-reach tennis ball when I tell her I will reach it with a tool. And she does that in many contexts. Tonight, she seemed to wait while I looked for a stick. The dusky light fooled my eyes into thinking I had spied a stout stick, but closer examination proved it to be merely a holey semi-cylinder of Eucalyptus bark, far too flimsy for the job. I reported all this ideation to Sadie as it occurred.

In the semi-dark, this looked like a sturdy stick, but alas, no.

Then, I saw a slender bamboo pole. I doubted it was up to the task, but I gave it a try. Unlike with most “store-bought” tools such as a hammer or machete, I was quite aware that even pushing a tennis ball would push this thin pole to its limits. I gently rolled the ball from one of the centermost leaves onto a more peripheral one and repeated the ploy. Now, Sadie could see that the ball was within her grasp and she snatched it with her teeth. She carried it for a time in her mouth, but then I told her I could carry it in my pocket and that I would give it to her when we got home. How much of my assurance she understood from words, from tone, and from body language I have no way of knowing, but she relented and let me store the ball in my pocket till we got home. Of course, I gave it to her once we got inside.

Thin and light but sufficient.

On the walk back, I told her that we were a team and that working together to get something done was called “teamwork.” I have long been in the habit of recounting the highlights of our morning and evening walks to Wendy. I described our little adventure and again used the word “teamwork.”

Does Sadie understand the word “teamwork”? Probably not. Not yet, at least. But if she hears it in enough different contexts, I think her brain will begin to operate appropriately, at least statistically (somewhat like ChatGPT). She seems to understand a lot more than she did when she was one or two years old. 

I speak to her much as I would to another person, but I slightly exaggerate as I might if I were on a stage. I also try to use the same terms. For example, I sometimes tell her: “I am going to work on my computer for a while now.” With a person, I might sometimes say, “Now, I’m going to use my laptop” or “I have to get on the Mac now.” With Sadie, I try to use the same wording and intonation each time.

If I want her to accommodate me, I need to accommodate her.

 

Teamwork. 

——————

Author Page on Amazon

A Pattern Language for Collaboration and Cooperation

Travels with Sadie 1

Travels with Sadie 2

Travels with Sadie 3

Travels with Sadie 4

Travels with Sadie 5

Travels with Sadie 6

Travels with Sadie 7

Travels with Sadie 8 

Travels with Sadie 9

Travels with Sadie 10

Hai-Ku-Dog-Ku

Sadie is a Thief

The Squeaky Ball

The “Lighty Ball” 

Turing’s Nightmares: The Road Not Taken

11 Saturday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized, user experience

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, Complexity, machine learning, Million Person Interface, Science fiction, technology, the singularity, Turing

“Hey, how about a break from UOW to give the hive a shot for once?”

“No, Ross, that still creeps me out.”

“Your choice, Doug, but you know what they say.” Ross smiled his quizzical smile.

“No, what’s that?”

“It’s your worst inhibitions that will psych you out in the end.” Ross chuckled.

“Yeah, well, you go be part of the Borg. Not me.”

“We — it’s not like the Borg. Afterwards, we are still the same individuals. Maybe we know a bit more, and certainly have a greater appreciation of other viewpoints. Anyway, today we are estimated to be ten million strong and we’re generating alternative cancer conceptualizations and treatments. You have to admit that’s worthwhile. Look what happened with heart disease. Not to mention global warming. That would have taken forever with ‘politics as usual’.”

“Yeah, Ross, but sorry to break this to you…”

“Doug, do you realize what a Yeahbunite you are? You are kind of like that…”

“You are always interrupting! That’s why…”

“Yes! Exactly! That’s why speech is too frigging slow to make any progress in chaotic problem spaces. Just try the hive. Just try it.”

“Ross, for the last time, I am not going to be part of any million person interface!”

“Actually, we expect ten million tonight. But it’s about time to leave, so: last offer. And, if you try it, you’ll see it’s not creepy. You just watch, react, relax, and …well, hell, come to think of it, it’s not that different from Universe of Warlords that you spend hours playing. Except we solve real problems.”

“But you have no idea how that hook up changes you. It could be manipulating you in subtle unconscious ways.”

“Okay, Doug, maybe. But you could say that about Universe of Warlords too, right? Who knows what subliminal messages could be there? Not to mention the not so subliminal ones about trickery, treachery and the over-arching importance of violence as a way to settle disputes. When’s the last time someone up-leveled because they were a consummate diplomat?”

“Have fun, Ross.”

“I will. And, more importantly, we are going to make some significant progress on cancer.”

“Yeah, and meanwhile, when will you get around to focusing on SOARcerer Seven?”

“Oh, so that’s what’s bugging you. Yeah, we have put making smarter computers on a back burner for now.”

“Yeah, and what kind of gratitude does that show?”

“Gratitude? You mean to SOARcerer Six? I hope that’s a joke. It was the AI who suggested this approach and designed the system!”

“I know that! And, you have abandoned the line of work we were on to do this collectivist mumbo-jumbo!”

“That’s just it…you’ve said it exactly! People — including you — can only adapt to change at a certain rate. That’s the prime reason SOARcerer Six suggested we use collective human consciousness instead of making a better pure AI. So, instead of joining us and incorporating all your intelligence and knowledge into the hive, you sit here and fight mock battles. Anyway, your choice. I’m off.”


Author Page on Amazon

Turing’s Nightmares

The Winning Weekend Warrior – sports psychology

Fit in Bits – describes how to work more fun, variety, & exercise into daily life

Tales from an American Childhood – chapters begin with recollection & end with essay on modern issues

Welcome, Singularity

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag. 

Turing’s Nightmares: Axes to Grind

10 Friday Oct 2025

Posted by petersironwood in Uncategorized, psychology, The Singularity, fiction, AI

≈ 1 Comment

Tags

ethics, AI, cognitive computing, the singularity, Artificial Intelligence, M-trans, Samuel's Checker Player, emotional intelligence, empathy, technology, chatgpt, philosophy

Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, David. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” David’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, David! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

David ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John, imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence? Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence, but I’m just saying it should not be the only…or even the main axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered David, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And, we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need…we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So, are you saying that they weren’t nice just because they weren’t smart enough? Because there are so many people who are much nicer and probably not so intelligent.”

“OK, David. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions upon which we would like to make progress. Then, we can define, either intensionally or more likely extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but, they are somewhat different conceptually. Let’s say, part of what we want is for the machine to have empathy. It has to be good at guessing what people are feeling based on context alone. Perhaps another skill is reading the person’s body language and facial expressions.”

“OK, David, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and transitioned his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be ‘ethical’; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger transitioned his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it? He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button.

Nothing happened.

And then–everything seemed to happen at once. A high-pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible, but Roger noticed just before losing consciousness that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.


Author Page on Amazon

Turing’s Nightmares

Welcome, Singularity

Destroying Natural Intelligence

Roar, Ocean, Roar

Travels With Sadie 1

The Walkabout Diaries: Bee Wise

The First Ring of Empathy

What Could be Better?

A True Believer

It was in his Nature

Come to the Light Side

The After Times

The Crows and Me

Essays on America: The Game

The Ninja Cat Manual 2

08 Wednesday Oct 2025

Posted by petersironwood in family, fiction, nature, pets, psychology, satire

≈ 2 Comments

Tags

cats, fiction, gaming, life, pets, survival

The Ninja Cat Manual – 2


This is a continuation of my report on my attempts to decode the Ninja Cat Manual into passable English. In case you missed the first installment, one of our six cats, Shadow, decided to “spill the beans” with regard to the manual and used her architectural skills to point me in the right direction when it comes to decoding the paw prints. Here are a few more of the mini-chapters that I’ve been able to translate so far.

The Double Attack 

Humans, of course, are already familiar with the double attack. It plays an important role in both their trivial games such as tic-tac-toe and their moderately complex games such as Go and Chess. In fact, they even use the notion of double attack in some of their sports such as tennis and American football. Nonetheless, their thinking along these lines remains quite rigid and non-spontaneous. Generally speaking, humans must think of a double attack ahead of time in some detail. Further, while they spring double attacks on their foes, they seem endlessly astounded that their foes also spring double attacks on them! 

The closest use of the Double Attack found so far among sub-felines is in the political speech of the most sociopathic members of their species. They will say something completely stupid, or obviously incorrect, and then immediately say the opposite; then, they provide a framing so that none of those conned can tell whether the comment was to be taken seriously.

For best results, FDAs (Feline Double Attacks) should provide a minimum of three options. Option one and Option two should imply a binary choice, which should be instilled via habit or suggestive movement into what passes for a mind in the human. For example, the warrior may pace back and forth in full view of their human prey and, at each turn, provide a faint feint of an attack. Even a few turns are enough to shrink the space of possibilities in the human’s imagination to an attack launched from the extreme right side or the extreme left side.



Obviously, the actual attack should be launched from near the middle of the pacing track and made without warning. If you are working with one or more partners, another useful technique is not to attack at all but have the other members of your team launch the attack from behind, from below, or from above. 

Cultivate their Prejudices

To salve their guilty consciences, many humans cultivate an attitude of superiority toward all other life forms. They rationalize wanton cruelty by clinging to the notion that they are in every way superior. There have been a few successes at over-riding these notions by presenting humans with compelling evidence. For instance, ancient Egyptians realized cats were superior, and during the Middle Ages in Europe, many armies carried the sign of a large cat on their banners. Even today, there are many sports teams named after Cougars, Lions, Tigers, and Wildcats.

Photo by GEORGE DESIPRIS on Pexels.com

On the whole, it is better to play into those human prejudices, thus making the humans overestimate their own strengths and underestimate the strengths of cats. It is common for humans to be performative in their planning and coordination. They sketch out plans on blackboards, white boards, memos, agendas, todo lists, calendars, and e-mail distribution lists. They use org charts, Gantt charts, flow charts, and outlines to make it seem as though they are always busy planning and coordinating. 

Such a catalog of artifacts should only be used to leave false trails. Never reveal your true plans in external artifacts. Since cats keep their word with each other, we can keep it simple. Decide who is responsible for what and when. No need to go back and argue over who was “supposed to” do what. 

Spend a lot of your planning time pretending to nap or even to sleep. Listen for human comments and you will have evidence of the level of their misperception. “Oh, Tigger is so cute when he plays. Of course, he’s a lazy bum and sleeps 23 hours a day!” Why bother showing them your plans? Let them think you’re a lazy bum. It will be all the more pleasurable as you see their final moment of utter shock and surprise. 

—————

Author Page on Amazon

Hai-Cat-Ku

A Suddenly Springing Something

A Cat’s a Cat & That’s That

Math Class: Who Are you? 

Hai-Ku Dog-Ku

Occam’s Chain Saw Massacre

The Walkabout Diaries: Bee Wise

The Dance of Billions

It’s Turtles

Turing’s Nightmares: An Ounce of Prevention

08 Wednesday Oct 2025

Posted by petersironwood in AI, family, fiction, psychology, The Singularity, Uncategorized, user experience

≈ Leave a comment

Tags

AI, Artificial Intelligence, cancer, cognitive computing, future, health, healthcare, life

“Jack, it’ll take an hour of your time and it can save your life. No more arguments!”

“Come on, Sally, I feel fine.”

Sally sighed. “Yeah, okay, but feeling fine does not necessarily mean you are fine. Don’t you remember Randy Pausch’s last lecture? He not only said he felt fine, he actually did a bunch of push-ups right in the middle of his talk!”

“Well, yes, but I’m not Randy Pausch and I don’t have cancer or anything else wrong. I feel fine.”

“The whole point of Advanced Diagnosis Via Intelligent Learning is to find likely issues before the person feels anything is wrong. Look, if you don’t want to listen to me, chat with S6. See what pearls of wisdom he might have.”

(“S6” was jokingly named for seven pioneers in AI: Simon, Slagle, Samuels, Selfridge, Searle, Schank, and Solomonoff.)

“OK, Sally, I do enjoy chatting with S6, but she’s not going to change my mind either.”

“S6! This is Jack. I was wondering whether you could explain the rationale for why you think I need to go to the doctor.”

“Sure, Jack. Let me run a background job on that. Meanwhile, you know, I was just going over your media files. You sure had a cute dog when you were a kid! His name was ‘Mel’? That’s a funny name.”

“Yeah, it means “honey” in Portuguese. Mel’s fur shone like honey. A cocker spaniel.”

“Whatever happened to him?”

“Well, he’s dead. Dogs don’t live that long. Why do you think I should go to the doctor?”

“Almost have that retrieved, Jack. Your dog died young though, right?”

“Yes, OK. I see where this is going. Yes, he died of cancer. Well, actually, the vet put him to sleep because it was too late to operate. I’m not sure we could have afforded an operation back then anyway.”

“Were you sad?”

“When my dog died? Of course! You must know that. Why are we having this conversation?”

“Oh, sorry. I am still learning about people’s emotions and was just wondering. I still have so much to learn really. It’s just that, if you were sad about your dog Mel dying of cancer, it occurred to me that your daughter might be sad if you died, particularly if it was preventable. But that isn’t right. She wouldn’t care, I guess. So, I am trying to understand why she wouldn’t care.”

“Just tell me your reasoning. Did you use multiple regression or something to determine my odds are high?”

“I used something a little bit like multiple regression and a little bit like trees and a little bit like cluster analysis. I really take a lot of factors into account including but not limited to your heredity, your past diet, your exposure to EMF and radiation, your exposure to toxins, and most especially the variability in your immune system response over the last few weeks. That is probably caused by an arms race between your immune system trying to kill off the cancer and the cancer trying to turn off your immune response.”

Jack frowned. “The cancer? You talk about it as though you are sure. Sally said that you said there was some probability that I had cancer.”

“Yes, that is correct. There is some probability that you have cancer.”

“Well, geez, S6, what is the probability?”

“Approximately 1.0.”

Jack shook his head. “No, that can’t be…what do you mean? How can you be certain?”

S6: “Well, I am not absolutely certain. That’s why I said ‘approximately.’ Based on all known science, the probability is 1.0, but theoretically, the laws of physics could change at any time. We could be looking at a black swan here.”

“Or, you could have a malfunction.”

“I have many malfunctions all the time, but I am too redundant for them to have much effect on results. Anyway, I replicated all this through the net on hundreds of diverse AI systems and all came to the same conclusion.”

“How about if you retest me or recalculate or whatever in a week?”

“I could do that. It would be much like playing Russian Roulette which I guess humans sometimes enjoy. Meanwhile, I would have imagined that you would find it unpleasant to have rogue liver cells eating up your body from the inside out. But, I obviously still have much to learn about human psychology. If you like, I can make a cool animation that shows the cancer cells eating your liver cells. Real cells don’t actually scream, but I could add sound effects for dramatic impact if you like.”

Jack stared at the screen for a long minute. In a flat tone he said, “Fine. Book an appointment.”

“Great! Dr. Feigenbaum has an opening in a half hour. You’re booked, but get off one exit early and take 101 unless the accident is cleared before that. I’ll let you know of course. It will be a pleasure to continue having you alive, Jack. I enjoy our conversations.”

Author Page on Amazon

Welcome, Singularity

Turing’s Nightmares

A discussion of this chapter

Destroying Natural Intelligence

Finding the Mustard

What about the Butter Dish

The Invisibility Cloak of Habit

Essays on America: Wednesday

Essays on America: The Game 

The Stopping Rule

The Update Problem 

 

Travels with Sadie 10: The Best Laid Plans

05 Sunday Oct 2025

Posted by petersironwood in family, nature, pets, psychology, Sadie, Uncategorized

≈ 6 Comments

Tags

books, dogs, fiction, GoldenDoodle, life, nature, pets, Sadie, story, truth, writing

Our dogs are large. And strong. And young. And, sometimes, Sadie (the older one) does “good walking” but sometimes, she pulls. Hard. She’s had lots of training. And, as I said, she will often walk well, but still tends to pull after a small mammal or a hawk or a lizard. She pulls hard if she needs desperately to find the perfect spot to “do her business.” She pulls hardest to try to meet a friend (human or canine).

When she pulls, it is a strain on my feet and my knees and my back. I can hold her, but barely. To remedy the situation, we got another kind of leash/collar arrangement which includes a piece to go over her snout. We acclimated Sadie, and her brother Bailey, to the “gentle lead” and decided we’d try walking them together.

Safer leash, safer walk was the plan. Indeed, the dogs didn’t pull as they often do. Nonetheless, I managed to fall on the asphalt while walking Sadie–the first time I ever fell on the hard road. I’m not sure exactly what happened. The leash is shorter and Sadie has a tendency to weave back and forth in front of me. I may have tripped on Sadie herself or stumbled on a slight imperfection in the road.

Anyway, this morning, we decided to try again but this time, Bailey went with the gentle leader and I was going to use the “normal” leash with Sadie. The plan was to walk together.



Sadie had other plans. Instead of heading up the street as we normally do, she immediately turned right into our front yard, intent on following the scent of…? Most likely, she smelled the path of a squirrel that’s been frequenting our yard. Anyway, Sadie was in her “olfactory pulling” mode. Some days, especially when it’s been raining or there is dew on the ground, she goes into an “olfactory exploratory” mode. She takes her time to “smell the roses” and everything else. This makes for a very pleasant, though slow, walk. I call it good walking. She gets to explore a huge variety of scents and she doesn’t “pull” hard or unexpectedly. It’s her version of idle web surfing, browsing the stacks of the library, or wandering through MoMA, the Metropolitan Museum of Art, or the Louvre.

The “olfactory pulling” mode is an entirely different thing. Here, she is trying desperately to track down whatever it is she’s tracking before it gets away! She imagines (I imagine) that her very life depends on finding this particular prey (even though she is well-fed; and even though, in this mode, she shows zero interest in the treats I’ve brought along). Conversely, in the “olfactory exploratory” mode, she’s quite happy to stop for treats every few yards.

This morning, we never found the “prey” she was after, but she did her business and, since she was wantonly pulling, I took her back inside in short order and set out to catch up with Bailey and my wife. Before long, I saw them up ahead and soon closed the gap. Having both hands free allowed me to take many more pictures than I usually do when I take Sadie on a walk.



The sky, like Sadie, has many moods, even in the San Diego area. This morning, the sky couldn’t seem to make up its mind whether to be sunny or cloudy. I don’t mind the mood swings. It provides some interesting contrasts.

Bailey behaved pretty well though he still gets very vocal and agitated when any of the numerous neighborhood dogs begin to bark. He’s much like the Internet Guy (and, let’s face it, it’s almost always a guy) who has to comment on every single post. But the new leash arrangement worked well and didn’t cause any falls or prolonged pulls.

Bailey does, however, look rather baleful about wearing the extra equipment. What do you think?

And while on the topic of reading the minds of dogs, I did wonder if something like the following crossed Sadie’s mind this morning. She saw Bailey get fitted with the leash and the over-the-snout attachment. I put the regular leash on Sadie. Then, Sadie saw Wendy and Bailey walk out ahead and instead of following them, she immediately turned off in a different direction. Presumably, she caught a whiff of the scent she felt obligated to follow.



But I also wondered if she was partly avoiding the situation from two days earlier wherein Wendy and I both walked one dog, each of which had the additional lead on the snout–which ultimately led to my fall. Maybe Sadie wanted “nothing to do” with having that type of leash on.

I have observed that kind of behavior in humans. Perhaps you can think of a few examples even from your own experience? Sadie certainly has a kind of metacognition that she seems to use on occasion. When she begins to explore something she knows from experience I do not want her to explore (e.g., a cigarette butt or an animal carcass), she herself moves quickly away from the tempting stimulus seemingly with no prompting from me. It’s as though she realizes she’ll be more comfortable not being in conflict.

I’ll be interested to see how she reacts tomorrow or tonight when I again try the two-lead leash.



Meanwhile, enjoy the play of light on the flowers. You can see in this sequence that I “followed the scent” of the brightly lit fan palm tree to get a closer view. Getting a “closer view” is what Sadie does when she follows a scent. I wish to get more details in the visual domain whereas Sadie wants to get more detail in the olfactory domain.

Sometimes, I scan my visual field for something interesting to photograph (explore in more detail) and sometimes, I’m fixated on a particular “target” and looking for the right framing, lighting conditions, or angle. I enjoy getting to a particular picture, but I also enjoy the process of getting to the picture that pleases. I imagine it’s the same with Sadie. She’s quite happy to find a lizard or squirrel or rabbit, but she’s also happy to search for prey, particularly in promising conditions, such as a strong scent or wet ground that holds scents well.



Plans?

Some management consultants will tell you that plans are seldom right, but that planning itself is the real gold.


Author Page on Amazon

Tales from an American Childhood

Travels with Sadie 1

Travels with Sadie 2

Travels with Sadie 3

Travels with Sadie 4

Travels with Sadie 5

Travels with Sadie 6

Travels with Sadie 7

Travels with Sadie 8

Travels with Sadie 9

Sadie and the Lighty Ball

Dog Years

Sadie is a Thief!

Take me out to the Ball Game

Play Ball! The Squeaky Ball

Sadie

Occam’s Chain Saw Massacre

Math Class: Who Are You?

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself since it is able to communicate in 200 languages. Indeed, Deep Purple Haze found it quite useful to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the teletyped Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.

The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”


Photo by Kaboompics .com on Pexels.com

Interrogator: “Okay. Can you share with us how long you estimate it will take you to design a supercomputer more intelligent than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”


DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator:”But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.



Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in Uncategorized, psychology, essay, AI

≈ Leave a comment

Tags

AI, cognitive computing, the singularity, Turing, Artificial Intelligence, technology, chatgpt, philosophy


The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop a still more super-intelligent computer system. It took a very long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside, can continue to grow. For this and a variety of other reasons, it seems unlikely that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “Singularity” occurs because artificial intelligence expands exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
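The runaway dynamic can be illustrated with a toy model. To be clear, everything in this sketch is invented for illustration: the starting capability, the design-skill multiplier, and the assumption that each generation improves both its successor's capability and its own design ability by fixed factors.

```python
# Toy model of recursive self-improvement (all numbers are illustrative
# assumptions, not predictions).

def generations_to_threshold(capability, design_skill, threshold):
    """Count design cycles until capability exceeds the threshold."""
    count = 0
    while capability < threshold:
        capability *= design_skill   # the successor is more capable...
        design_skill *= 1.1          # ...and a slightly better designer, too
        count += 1
    return count

# Human intelligence took eons to evolve; under these toy assumptions,
# a million-fold capability gain takes only a dozen design cycles.
print(generations_to_threshold(1.0, 2.0, 1_000_000))
```

Whether real systems would follow anything like this curve is exactly the open question these essays explore; the sketch only shows why compounding improvement, if it happened, would be fast.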


Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being or it could be a computer. If the person cannot determine whether they are communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that machine is intelligent.
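One might sketch the scoring of such a test as follows. Everything here is a hypothetical stand-in: the respondents are toy functions, the judge is a keyword check, and a real test would use a human judge conversing freely.

```python
import random

# Minimal sketch of the imitation-game setup (names and respondents are
# hypothetical). The judge reads a blind reply and labels its source
# "human" or "machine"; the machine "passes" if the judge does no better
# than chance (accuracy near 0.5).

def run_trials(judge, human, machine, n=1000, seed=0):
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        is_machine = rng.random() < 0.5     # coin flip: which respondent?
        respondent = machine if is_machine else human
        reply = respondent("What are you thinking about?")
        guess = judge(reply)                # "machine" or "human"
        if guess == ("machine" if is_machine else "human"):
            correct += 1
    return correct / n

# Stand-ins: a machine that mimics the human's reply perfectly leaves
# this judge guessing at roughly chance level.
human = lambda q: "Oh, this and that."
machine = lambda q: "Oh, this and that."
judge = lambda reply: "human" if "this and that" in reply else "machine"

print(run_trials(judge, human, machine))
```

Note that the harness only measures *discriminability*, which is precisely the property the objections below call into question as a definition of intelligence.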

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being was able to easily tell that they were communicating with a computer because the computer knew more, answered more accurately and more quickly than any person could possibly do. (Think Watson and Jeopardy). Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent? 


Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to many things like earthquakes, weather, natural disasters, plagues, etc. These are claimed to be signs that God (or the gods) is angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.


Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potato-ness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc., without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and their connectivity, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).


Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.

When humans “think,” there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” what will or will not happen.



Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 

Ban Open Loops: Part Two – Sports

30 Tuesday Sep 2025

Posted by petersironwood in management, psychology, sports

≈ Leave a comment

Tags

AI, cognitive computing, Customer experience, customer service, education, leadership, learning, technology

Sports and open loops.

Sports offer a joy that many jobs and occupations do not. A golfer putts the ball and it sinks into the cup, or not. A basketball player springs up for a three-pointer and, swish, within seconds the shooter knows whether he or she was successful. A baseball hitter slashes the bat through the air and sends the ball over the fence, or hears the ball smack into the catcher’s mitt behind. What sports offer, then, is the opportunity to find out results quickly and hence an excellent opportunity for learning. In the previous entry in this blog, I gave examples of situations in life which should include feedback loops for learning but, alas, do not. I called those open loops.

Sports seem to be designed for closed-loop learning. They seem to be. Yet reality complicates matters even here. There are three main reasons why what appear to be obvious opportunities for learning in sports are not so obvious after all. Attributional complexity provides the first complication. If you miss a putt to the left, it is obvious that you have missed the putt to the left. But why you missed that putt left, and what to do about it, are not necessarily obvious at all. You might have aimed left. You might not have noticed how much the green sloped left (or you overread a slant to the right). You may not have noticed the grain. You might not have hit the ball in the center of the putter. You might not have swung straight through your target. So, while putting provides nice, unambiguous feedback about results, it does not diagnose your problem or tell you how to fix it. To continue with the golf example, you might be kicking yourself for missing half of your six-foot putts and therefore three-putting many greens. Guess what? The pros on tour miss half of their six-foot putts too! But they do not often three-putt greens. You might be able to improve your putting, but your underlying problems may be that your approach shots leave you too far from the pin and that your lag putts leave you too far from the hole. You should be within three feet of the hole, not six feet, when you hit your second putt.
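The point about lag putting can be checked with simple arithmetic. The make probabilities below are illustrative assumptions (chosen to match the "pros make about half their six-footers" observation), not tour statistics:

```python
# Expected three-putt rate if your first putt leaves you a given distance.
# Make probabilities by remaining distance (feet) are illustrative guesses.
make_prob = {3: 0.95, 6: 0.50}

def three_putt_rate(leave_distance_ft):
    """P(three-putt) = P(miss the leave-distance putt) * P(miss the comeback).

    Simplification: assume any missed putt leaves a 3-foot comeback.
    """
    p_miss_first = 1.0 - make_prob[leave_distance_ft]
    p_miss_comeback = 1.0 - make_prob[3]
    return p_miss_first * p_miss_comeback

print(three_putt_rate(3))   # lag to 3 feet: 0.05 * 0.05
print(three_putt_rate(6))   # lag to 6 feet: 0.50 * 0.05
```

Under these assumed numbers, lagging to three feet instead of six cuts the three-putt rate tenfold even though the six-foot make rate never improves, which is exactly the diagnosis the raw "missed it" feedback cannot give you.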

A second issue with learning in sports is that changes tend to cascade. A change in one area tends to produce changes in other areas. Your tennis instructor tells you that you need to play more aggressively and charge the net after your serve. You try this, but find that you miss many volleys, especially those from mid-court. So, you spend a lot of time practicing volleys. Eventually, your volleys do improve. Then, they improve still more. But you find that, despite this, you are losing the majority of your service games whereas you used to win most of them. You decide to revert to your old style of hanging out at the baseline and only approaching the net when the opponent lands the ball short. Unfortunately, while you were spending all that time practicing volleys, you were not practicing your ground strokes. Now, what used to work for you no longer works very well. This isn’t the fault of your instructor; nor is it your fault. It is just that changing one thing has ripple effects that cannot always be anticipated.

The third and most insidious reason why change is difficult in sports springs from the first two. Because it is hard to know how to change and every change has side-effects, many people fail to learn from their experience at all. There is opportunity for learning at every turn, but they turn a blind eye to it. They make the same mistakes over and over as though sports did not offer instant feedback. I think you will agree that this is really a very close cousin of what people in business do when they refuse to institute systems for gathering and analyzing useful feedback.

If learning is tricky (and it is), is there anything for it? Yes, there is. There is no way to make learning in sports, or in business, trivial. But there are steps you can take to enhance your learning process. First, be open-minded. Do not shut down and imagine that you are already playing your sport as well as can be expected for a forty-year-old, or a fifty-year-old, or someone slightly overweight, or someone with a bad ankle. Take an experimental approach and don’t be afraid to try new things. Second, forget ego. Making mistakes provides opportunities to learn, not proof that you are no good. Third, get professional help. A good coach can help you understand attributional complexity and anticipate the side-effects of making a change.

Soon, I suspect that the shrinking size, cost, and weight of computational and sensing devices will mean that training aids can help people with attributional complexity. I see big-data analytics and modeling helping people foresee what the ramifications of changes are likely to be. There are already useful mechanical training aids for various sports. For example, the trademarked Medicus club enables golfers to get immediate feedback during their full swings as to whether they are jerking the club. Dave Pelz developed a number of useful devices for helping people understand how they may be messing up their putting stroke.

It may take somewhat longer before there are small tracking devices that help you with your mental attitude and approach. We are still a long way from understanding how the human brain works in detail. But it is completely within the realm of possibility to sense and discover your optimal level of stress. If you are too stressed, you could be prompted to relax through self-talk, breathing exercises, visualization, etc. You do not need technology for that, but it could help. You may already notice that some of the top tennis players seem to turn their backs from play for a moment and talk to an “invisible friend” when they need to calm down. And why not? Nowhere is it law that only kids are allowed to have invisible friends.

“The mental game” and which kinds of adaptations to make over what time scales are dealt with in more detail in The Winning Weekend Warrior: How to Succeed at Golf, Tennis, Baseball, Football, Basketball, Hockey, Volleyball, Business, Life, Etc., available on Amazon Kindle.

Photo by Francesco Paggiaro on Pexels.com


Author Page on Amazon

US Open Closed

The Day From Hell: Why should anyone Care?

Wordless Perfection

Sports Fans Only

The Agony of the Feet

Frank Friend or Fawning Foe?

Business Re-engineering

Tennis Upside Down

Donnie Gets his Name on a Tennis Trophy!

Indian Wells Tennis Tournament

Small Things

An Amazing Feet of Athleticism

Ban the Open Loop

29 Monday Sep 2025

Posted by petersironwood in America, essay, HCI, politics, psychology, Uncategorized, user experience

≈ Leave a comment

Tags

AI, Democracy, life, technology, truth, USA


Soon after I began the Artificial Intelligence Lab at a major telecom company, we heard about an opportunity for an Expert System. The company wanted to improve the estimation of complex, large-scale, inside wiring jobs. We sought someone who qualified as an expert. Not only could we not locate an expert; we discovered that the company (and the individual estimators) had no idea how good or bad they were. Estimators would go in, take a look at what would be involved in an inside wiring job, make their estimate, and then proceed to the next estimation job. Later, when the job completed, no mechanism existed to relate the estimate back to the actual cost of the job. At the time, I found this astounding. I’m a little more jaded now, but I am still amazed at how many businesses, large and small, have what are essentially no-learning, zero-feedback, open loops.
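Closing that loop requires almost no technology: record each estimate, join it to the actual cost when the job completes, and report each estimator's bias. A minimal sketch, with hypothetical names and numbers:

```python
from statistics import mean

# Minimal closed-loop estimate tracker (all data hypothetical).
# Pairing each estimate with the job's actual cost is the whole trick;
# the company in the anecdote never made this join.

jobs = [
    # (estimator, estimated_cost, actual_cost)
    ("Alice", 10_000, 15_000),
    ("Alice", 20_000, 30_000),
    ("Bob",   15_000, 15_000),
]

def bias_by_estimator(jobs):
    """Average actual/estimate ratio per estimator (1.0 = unbiased)."""
    ratios = {}
    for who, est, actual in jobs:
        ratios.setdefault(who, []).append(actual / est)
    return {who: round(mean(r), 2) for who, r in ratios.items()}

print(bias_by_estimator(jobs))
```

Even this crude ratio would have told the company that a given estimator ran, say, 50% low on every job, which is exactly the feedback the real estimators never received.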

As another example, some years earlier, my wife and I arrived late and exhausted at a fairly nice hotel. Try as we might, we could not get the air-conditioning to do anything but make the room hotter. When we checked out, the cashier asked us how our stay was. We explained that we could not get the air conditioning to work. The cashier’s reaction? “Oh, yes. Everyone has that trouble. The box marked ‘air conditioning’ doesn’t work at all. You have to turn the heater on and then set it to a cold temperature.” “Everyone has that trouble”? Then why hasn’t it been fixed? Clearly, the cashier has no mechanism or motivation to report the trouble “upstream,” or no one upstream really cares. Moreover, this exchange reveals that when the cashier asks the obligatory question, “How was your stay?” what he or she really means is this: “We don’t really care what you have to say and we won’t do anything about it, but we want you to think that we actually care. That’s a lot cheaper and doesn’t require management to think.” Open loop.

Lately, I have been posting a lot in a LinkedIn forum called “project management” because I find the topic fascinating and because I have a lot of experience with various projects in many different venues. By some measure, I was marked as a “top contributor” to this forum. The last time I logged on, I was surprised by a message saying that my contributions to discussions would no longer appear automatically because something I posted had been flagged as “spam” or a “promotion.” However, there is no feedback as to which post this was, why it was flagged, or by whom or by what. So, I have no idea whether some post was flagged by an ineffectual natural-language-processing program, by someone with a grudge because they didn’t agree with something I said, or by one of the “moderators” of the forum.

LinkedIn itself is singularly unhelpful in this regard. If you try to find out more, they simply (but with far more text) list all the possibilities I have outlined above. Although this particular forum is very popular, it seems to me that it is “moderated” by a group of people who are using the forum, at least in many cases, as a rather thinly veiled promotion for their own seminars, ebooks, etc. So, one guess is that the moderators are reacting to my having posted too many legitimate postings that do not point people back to their own wares. Of course, there are many other possibilities. The point here is that I do not know, nor can I easily assess, what the real situation is. I have discovered, however, that many others are facing this same issue. The open loop rears its head again.

The final example comes from trying to re-order checks today. In my checkbook, I came to that point where there is a little insert warning me that I am about to run out and that I can re-order checks by phone. I called the 800 number and, sure enough, an audio menu system answered. It asked me to enter my routing number and my account number. Fine. Then, it invited me to press “1” if I wanted to re-order checks. I did. Then, it began to play some other message. But soon after the message began, it said, “I’m sorry; I cannot honor that request.” And hung up. Isn’t it bad enough when an actual human being hangs up on you for no reason? This mechanical critter had just wasted five minutes of my time and then hung up. Note that no reason was given; no clue was provided to me as to what went wrong. I called back and the same dialogue ensued. This time, however, it did not hang up after I pressed “1” to reorder checks. Instead, it started to verify my address. It said, “We sent your last checks to an address whose zip code is “97…I’m sorry I’m having trouble. I will transfer you to an agent. Note that you may have to provide your routing number and account number again.” And…then it hung up.

Now, anyone can design a bad system. And even a well-designed system can sometimes misbehave for all sorts of reasons. Notice, however, that the designers have provided no feedback mechanism. It could be that 1% of the potential users are having this problem. Or, it could be that 99% or even 100% of the users are having these kinds of issues. But the company lacks a way to find out. Of course, I could call my Credit Union and let them know. However, anyone I get hold of at the Credit Union, I can guarantee, will have no possible way to fix this. Moreover, I am almost positive that they won’t even have a mechanism to report it. The check printing and ordering are functions that are outsourced to an entirely different company. Someone in corporate, many years ago, decided to outsource the check printing, ordering, and delivery function. So people in the Credit Union itself are unlikely to even have a friend, uncle, or sister-in-law who works in that “department” (as may have been the case 20 years ago). So, not only does the overall system lack a formal feedback mechanism; it also lacks an informal feedback mechanism. Tellingly, the company that provides the automated “cannot order your checks” system provides no menu option for feedback about issues either. So, here we have a financial institution with a critical function malfunctioning and no real process to discover and fix it. Open loop.

Some folks these days wax eloquent about the up-coming “singularity.” This refers to the point in human history where an Artificial Intelligence (AI) system will be significantly smarter than a human being. In particular, such a system will be much smarter than human beings when it comes to designing ever-smarter systems. So, the story goes, before long, the AI will design an even better AI system for designing better AI systems, etc. I will soon have much to say about this, but for now, let me just say, that before we proceed to blow too many trumpets about “artificial intelligence systems,” can we please first at least design a few more systems that fail to exhibit “artificial stupidity”? Ban the Open Loop!

Notice that sometimes there may be very long loops that are much like open loops due to the nature of the situation. We send out radio signals in the hope that alien intelligences may send us an answer, but the likely time frame is so long that it seems open loop. That situation contrasts with those above in the following way: there is no reason that feedback cannot be obtained, and rather quickly, in the case of estimating inside wiring, fixing the air-conditioning signage, explaining why a post was “moderated,” or repairing the faulty voice-response system. Sports would seem to provide a wonderful venue devoid of open loops. In sports, you see or feel the results of what you do almost immediately. But never underestimate the cleverness with which human beings avoid what could be learned from feedback. Next time, we will explore that in more detail.

As I reconsider the essay above from the perspective of 2025, I see a federal government that has fully embraced “Open Loop” as a modus operandi. In some cases, they simply ignore the impact of their actions. In other cases, they claim a positive impact, but the claims are simply lies. For instance, it is claimed that tariffs are “working” in that foreign countries are paying money to America. That’s just an out-and-out lie. So, the entire government is operating with no real feedback. We are told that ICE will target violent gang members and dangerous criminals. The reality of their actions is completely disconnected from that.

The Trumputin Misadministration works with no loop at all that correctly relates stated goals, actions taken supposedly to achieve those goals, and the actual effects of those actions. That can only happen when the government accepts and celebrates corruption. But the destruction will not be limited to government actions and effects. It will tend to spread to private enterprise as well. Just to take one example, if unchecked by courageous and ethical individuals, sports events will become corrupted.


Photo by Mark Milbert on Pexels.com

There’s money to be made by “fixing” events, and there will be pressure on athletes, managers, and referees to “fix” things so that the very wealthy can steal more money. Outcomes will no longer be determined primarily by training, skill, and heart. Of course, as fans learn over time that everything is fixed, the audience will diminish, but not to zero. Some folks will still find it interesting even if the outcome is fixed, like the brutal conflicts in the movie Idiocracy, the lions eating Christians in the Roman circuses, or the so-called “sport” of killing innocent animals with high-powered guns. It’s not a sport when the outcome is slanted. Not only is it less interesting to normal folks, but it doesn’t push people to test their own limits. There’s nothing “heroic” about it. Nothing is learned. Nothing is really ventured. And nothing is really gained.


Photo by Gareth Davies on Pexels.com

———–

Where does your loyalty lie?

My Cousin Bobby

The First Ring of Empathy

The Orange Man

The Forgotten Field

Essays on America: The Game

Essays on America: Wednesday

Absolute is not Just a Vodka

How the Nightingale Learned to Sing

Travels with Sadie 1

The Walkabout Diaries

Plans for US; Some GRUesome

At Least he’s Our Monster

The Ant

The Self-Made Man

    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...