petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: AI

Ban the Open Loop

29 Monday Sep 2025

Posted by petersironwood in America, essay, HCI, politics, psychology, Uncategorized, user experience

≈ Leave a comment

Tags

AI, Democracy, life, technology, truth, USA

Soon after I began the Artificial Intelligence Lab at a major telecom company, we heard about an opportunity for an Expert System. The company wanted to improve the estimation of complex, large-scale, inside wiring jobs. We sought someone who qualified as an expert. Not only could we not locate an expert; we discovered that the company (and the individual estimators) had no idea how good or bad they were. Estimators would go in, take a look at what would be involved in an inside wiring job, make their estimate, and then proceed to the next estimation job. Later, when the job completed, no mechanism existed to relate the estimate back to the actual cost of the job. At the time, I found this astounding. I’m a little more jaded now, but I am still amazed at how many businesses, large and small, have what are essentially no-learning, zero-feedback, open loops.

As another example, some years earlier, my wife and I arrived late and exhausted at a fairly nice hotel. Try as we might, we could not get the air conditioning to do anything but make the room hotter. When we checked out, the cashier asked us how our stay was. We explained that we could not get the air conditioning to work. The cashier’s reaction? “Oh, yes. Everyone has that trouble. The box marked ‘air conditioning’ doesn’t work at all. You have to turn the heater on and then set it to a cold temperature.” “Everyone has that trouble”? Then why hasn’t this been fixed? Clearly, the cashier has either no mechanism or no motivation to report the trouble “upstream,” or no one upstream really cares. Moreover, this exchange reveals that when the cashier asks the obligatory question, “How was your stay?” what he or she really means is this: “We don’t really care what you have to say and we won’t do anything about it, but we want you to think that we actually care. That’s a lot cheaper and doesn’t require management to think.” Open Loop.

Lately, I have been posting a lot in a LinkedIn forum called “project management” because I find the topic fascinating and because I have a lot of experience with various projects in many different venues. By some measure, I was marked as a “top contributor” to this forum. The last time I logged on, I was surprised by a message saying that my contributions to discussions would no longer appear automatically because something I posted had been flagged as “spam” or a “promotion.” However, there was no feedback as to which post this was, why it was flagged, or by whom or by what. So, I have no idea whether some post was flagged by an ineffectual natural-language-processing program, by someone with a grudge because they didn’t agree with something I said, or by one of the “moderators” of the forum.

LinkedIn itself is singularly unhelpful in this regard. If you try to find out more, they simply list (though with far more text) all the possibilities I have outlined above. Although this particular forum is very popular, it seems to me that it is “moderated” by a group of people who are, at least in many cases, using the forum as rather thinly veiled promotion for their own seminars, ebooks, and so on. So, one guess is that the moderators are reacting to my having posted too many legitimate postings that do not point people back to their own wares. Of course, there are many other possibilities. The point here is that I do not know, nor can I easily assess, what the real situation is. I have discovered, however, that many others are facing this same issue. The open loop rears its head again.

The final example comes from trying to re-order checks today. In my checkbook, I came to that point where there is a little insert warning me that I am about to run out and that I can re-order checks by phone. I called the 800 number and, sure enough, an audio menu system answered. It asked me to enter my routing number and my account number. Fine. Then, it invited me to press “1” if I wanted to re-order checks. I did. Then, it began to play some other message. But soon after the message began, it said, “I’m sorry; I cannot honor that request.” And hung up. Isn’t it bad enough when an actual human being hangs up on you for no reason? This mechanical critter had just wasted five minutes of my time and then hung up. Note that no reason was given; no clue was provided as to what went wrong. I called back and the same dialogue ensued. This time, however, it did not hang up after I pressed “1” to re-order checks. Instead, it started to verify my address. It said, “We sent your last checks to an address whose zip code is 97…I’m sorry, I’m having trouble. I will transfer you to an agent. Note that you may have to provide your routing number and account number again.” And…then it hung up.

Now, anyone can design a bad system. And even a well-designed system can sometimes misbehave for all sorts of reasons. Notice, however, that the designers have provided no feedback mechanism. It could be that 1% of the potential users are having this problem. Or it could be that 99% or even 100% of the users are having these kinds of issues. But the company lacks a way to find out. Of course, I could call my Credit Union and let them know. However, anyone I get hold of at the Credit Union, I can guarantee, will have no possible way to fix this. Moreover, I am almost positive that they won’t even have a mechanism to report it. Check printing and ordering are functions that are outsourced to an entirely different company. Someone in corporate, many years ago, decided to outsource the check printing, ordering, and delivery function. So people in the Credit Union itself are unlikely to even have a friend, uncle, or sister-in-law who works in that “department” (as might have been the case 20 years ago). So, not only does the overall system lack a formal feedback mechanism; it also lacks an informal feedback mechanism. Tellingly, the company that provides the automated “cannot order your checks” system provides no menu option for feedback about issues either. So, here we have a financial institution with a critical function malfunctioning and no real process to discover and fix it. Open loop.

Some folks these days wax eloquent about the upcoming “singularity.” This refers to the point in human history when an Artificial Intelligence (AI) system will be significantly smarter than a human being. In particular, such a system will be much smarter than human beings when it comes to designing ever-smarter systems. So, the story goes, before long, the AI will design an even better AI system for designing better AI systems, and so on. I will soon have much to say about this, but for now, let me just say that before we blow too many trumpets about “artificial intelligence systems,” can we please first design a few more systems that fail to exhibit “artificial stupidity”? Ban the Open Loop!

Notice that sometimes there may be very long loops that are much like open loops due to the nature of the situation. We send out radio signals in the hope that alien intelligences may send us an answer. But the likely time frame is so long that it seems open loop. That situation contrasts with those above in the following way: there is no reason that feedback cannot be obtained, and rather quickly, in the cases of estimating inside wiring, fixing the air-conditioning signage, explaining why a post was “moderated,” or repairing the faulty voice-response system. Sports would seem to provide a wonderful venue devoid of open loops. In sports, you see or feel the results of what you do almost immediately. But never underestimate the cleverness with which human beings are able to avoid what could be learned from feedback. Next time, we will explore that in more detail.

As I reconsider the essay above from the perspective of 2025, I see a federal government that has fully embraced “Open Loop” as a modus operandi — in some cases, they simply ignore the impact of their actions. In other cases, they do claim a positive impact, but the claims are simply lies. For instance, it is claimed that tariffs are “working” in that foreign countries are paying money to America. That’s just an out-and-out lie. So, the entire government is operating with no real feedback. We are told that ICE will target violent gang members and dangerous criminals. The reality of their actions is completely disconnected from that.

The Trumputin Misadministration works with no loop at all that correctly relates stated goals, actions taken supposedly to achieve those goals, and the actual effects of those actions. That can only happen when the government accepts and celebrates corruption. But the destruction will not be limited to government actions and effects. It will tend to spread to private enterprise as well. Just to take one example, if unchecked by courageous and ethical individuals, sports events will become corrupted.


There’s money to be made by “fixing” events, and there will be pressure on athletes, managers, and referees to “fix” things so that the very wealthy can steal more money. Outcomes will no longer primarily be determined by training, skill, and heart. Of course, as fans learn over time that everything is fixed, the audience will diminish, but not to zero. Some folks will still find it interesting even if the outcome is fixed, like the brutal conflicts in the movie Idiocracy, the lions eating Christians in the Roman circuses, or the so-called “sport” of killing innocent animals with high-powered guns. It’s not a sport when the outcome is slanted. Not only is it less interesting to normal folks, but it doesn’t push people to test their own limits. There’s nothing “heroic” about it. Nothing is learned. Nothing is really ventured. And nothing is really gained.



Destroying Natural Intelligence

27 Thursday Mar 2025

Posted by petersironwood in America, apocalypse, politics, The Singularity

≈ 27 Comments

Tags

AI, Artificial Intelligence, chatgpt, Democracy, politics, technology, truth, USA

At first, they seemed as though they were simply errors. In fact, they were the types of errors you’d expect an AI system to make if its “intelligence” were based on a fairly uncritical amalgam of a vast amount of ingested written material. The strains of the Beatles’ “Nowhere Man” reverberate in my head. I no longer think the mistakes are “innocent” mistakes. They are part of an overall effort to destroy human intelligence. That does not necessarily mean that some evil person somewhere said to themselves: “Let’s destroy human intelligence. Then, people will be more willing to accept AI as being intelligent.” It could be that the attempt to destroy human intelligence is more a side-effect of unrelenting greed and hubris than a well-thought-out plot.

AI generated.

What errors am I talking about? The first set of errors I noticed happened when my wife specifically asked ChatGPT about my biography. Admittedly, my name is very common. When I worked at IBM, at one point, there were 22 employees with the name “John Thomas.” Probably the most famous person with my name (John Charles Thomas) was an opera singer. “John Curtis Thomas” was a famous high jumper. The biographic summary produced by ChatGPT did include information about me—as well as several other people. If you know much at all about the real world, you know that a single person is very unlikely to hold academic positions at three different institutions while specializing in three different fields. ChatGPT didn’t blink, though.

A few months ago, I wrote a blog post pointing out that we can never be in the same place twice. We’re spinning and spiraling through the universe at high speed. To make that statement more quantitative, I asked my search engine how far the sun travels through the galaxy in the course of a year. It gave an answer which seemed to check out with other sources and then—it gratuitously added this erroneous comment: “This is called a light year.” 

What? 

No. A “light year” is the distance light travels in a year, not how far the sun travels in a year. 

What was more disturbing was that the answer was the first thing I saw. The search engine didn’t ask me if I wanted to try out an experimental AI system. It presented it as “the answer.”

But wait. There’s more. A few hours later, I demo’ed this and the offending notion about what constituted a light year was gone from the answer. Coincidence? 

AI generated. I asked for a forest with rabbit ears instead of leaves. Does this fit the bill?

A few weeks later, I happened to be at a dinner and the conversation turned to Arabic. I mentioned that I had tried to learn a little in preparation for a possible assignment for IBM. I said that, in Arabic, verbs as well as nouns and adjectives are “gendered.” Someone said, “Oh, yes, it’s the same in Spanish.” No, it’s not. I checked with a query—not because I wasn’t sure, but in order to have “objective proof.” To my astonishment, when I asked, “Which languages have gendered verbs?” the answer came back saying that this was true of Romance languages and Slavic languages. It is not true of Romance languages. Then, the AI system offered an example. That’s nice. But what the “example” actually shows is the verb not changing with gender. The next day, I went to replicate this error and it was gone. Coincidence?

Last Saturday, at the “Geezer’s Breakfast,” talk turned to politics and someone asked whether Alaska or Greenland was bigger. I entered a query something like: “Which is bigger? Greenland or Alaska.” I got back an AI summary. It compared the area of Greenland and Iceland. Following the AI summary were ten links, each of which compared Greenland and Iceland. I turned the question around: “Which is larger? Alaska or Greenland?” Now, the AI summary came back with the answer: “Alaska is larger with 586,000 square miles while Greenland is 836,300 square miles.”

AI generated. I asked for a map of the southern USA with the Gulf of Mexico labeled as “The Gulf of Ignorance” (You ready for an AI surgeon?)



What?? 

When I asked the same question a few minutes later, the comparison was fixed. 

So…what the hell is going on? How is the AI system repairing its answers? Several possibilities spring to mind. 

There could be a team of people “checking on” the AI answers and repairing them. That seems unlikely to scale. Spot-checking I could understand, or perhaps checking answers in batches, but it’s as though each mistake triggers a change that fixes that particular issue.

Way back in the late 1950’s/early 1960’s, Arthur Lee Samuel developed a program to play checkers. The machine had various versions that played against each other in order to improve play faster than could be done by having the checker player play human opponents. This general idea has been used in AI many times since. 

One possible explanation of the AI self-correction is that the AI system has a variety of different “versions” that answer questions. For simplicity of explanation, let’s say there are ten, numbered 1 through 10. When a user asks a question, they randomly get one version’s answer; let’s say they get an answer based on version 7. After the question is “answered” by version 7, its answer is compared to the consensus answer of all ten. If the system is lucky, most of the other nine versions will answer correctly. This provides feedback that allows the system to improve.
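The consensus scheme above is easy to simulate. Here is a minimal sketch in Python; everything in it (the ten versions, the 80% per-version accuracy, the function names) is my own hypothetical illustration, not a description of any real system:

```python
import random
from collections import Counter

random.seed(0)
N_VERSIONS = 10   # hypothetical: ten model "versions," as in the text
P_CORRECT = 0.8   # hypothetical: each version answers correctly 80% of the time

def version_answer(version_id: int, question: str) -> str:
    """Stand-in for one version's answer to a question."""
    return "correct" if random.random() < P_CORRECT else "wrong"

def consensus_check(question: str):
    """Serve one random version's answer, then compare it to the ensemble consensus."""
    served = random.randrange(N_VERSIONS)
    answers = [version_answer(v, question) for v in range(N_VERSIONS)]
    consensus, _ = Counter(answers).most_common(1)[0]
    # Disagreement with the consensus is the feedback signal that would
    # flag the served answer for repair.
    needs_fix = answers[served] != consensus
    return answers[served], consensus, needs_fix
```

Most of the time the consensus is right, so a served answer that disagrees with it becomes a cheap, automatic "this was probably wrong" signal, produced with no human in the loop, much as Samuel's checker-playing versions generated feedback for each other.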

There is a more paranoid explanation. At least, a few years ago, I would have considered it paranoid because I like to give people the benefit of the doubt and I vastly underestimated just how evil some of the greediest people on the planet really are. So, now, what I’m about to propose, while I still consider it paranoid, is not nearly so paranoid as it would have seemed a few years ago. 

MORE! MORE! MORE!

Not only have I discovered that the ultra-greedy are short-sighted enough to usher in a dictatorship that will destroy them and their wealth (read about what Putin did, and Stalin before him), but I have noticed an incredible number of times in the last few years when a topic I am talking about is followed within minutes by ads for products and services relevant to that conversation. Coincidence?

Possibly. But it’s also possible that the likes of Alexa and Siri are constantly listening in and it is my feedback that is being used to signal that the AI system has just given the wrong answer. 

Also possible: AI systems are giving occasional wrong answers on purpose. But why? They could be intentionally propagating enough lies to make people question whether truth exists, but not enough lies to make us simply stop trusting AI systems. Who would benefit from that? In the long run, absolutely no one. But in the short term, it helps people who aim to disenfranchise everyone but the very greediest.

Next step: See whether the AI immediately self-corrects even without my indicating that it made a mistake. 


Meanwhile, it should also be noted that promulgating AI is only one prong of a two-pronged attack on natural intelligence. The other prong is the loud, persistent, threatening drumbeat of false narratives that we (Americans, as well as the rest of the world) are supposed to accept as excuses for stupidity. America is again touting non-cures for serious disease and making excuses for egregious security breaches rather than admitting error and searching for ways to ensure they never happen again.

AI-generated image to the prompt: A man trips over a log which makes him spill an armload of cakes. (How exactly was he carrying this armload of cakes? How does one not notice a log this large? Perhaps having three legs makes it more confusing to step over? Are you ready for an AI surgeon now?)


Increased E-Fishiness in Government

26 Wednesday Feb 2025

Posted by petersironwood in America, essay

≈ 6 Comments

Tags

AI, Artificial Intelligence, Business, Democracy, DOGE, health, leadership, life, politics, satire, USA

Increased government efficiency! Sign me up! That sounds great! 

It sounds especially great if your billionaire-owned media companies keep reminding you that you are paying too much in taxes! Not only that! The national debt keeps going up, up, up, and your kids and grandkids will have to pay even more in taxes. And, hey—if billionaires don’t end up paying any taxes, that’s actually a good thing because that way they can create lots of new jobs! And, besides, if they weren’t doing something worth billions and billions of dollars, why would they be so rich? Of course they deserve it! And, if CEOs weren’t paid outrageous salaries, they wouldn’t even be CEOs, and some second-rate person would just run the company into the ground.

It all sounds so plausible. Yet, every bit of it is a lie. But it isn’t just a bunch of lies that’s been told here and there by a few people. It’s been propagated over and over and over and over again for decades on various media and on social media. It’s been propagated on podcasts, and books, and pamphlets. 

“That old lady doesn’t deserve to steal your one cookie! Watch out for her!”

Here are some things to consider.

The very greediest people in the world are not necessarily the most competent. Most jobs are actually created by small businesses, not by giant corporations. Giant corporations often outsource jobs to other countries where the labor is cheaper and where they don’t have to follow any pesky child labor laws or safety in the workplace regulations. Increasingly, giant corporations look to automate more and more jobs and to use AI to replace people. 

Highly paid CEOs have often run giant companies into the ground. Remember that on your next trip to Montgomery Ward or Radio Shack. Who else? Lehman Brothers, Bank of New England, Texaco, Chrysler, Enron, PG&E, GM, WorldCom, and a host of others. But wait! GM still makes cars. I can get gas at a Texaco station. How could it be that they went bankrupt? You bailed out GM. Texaco went bankrupt, but the brand name was still worth something; Chevron owns the brand.

Also note that in countries where CEOs are paid only ten times the average wage of their employees instead of a thousand times as much, the CEOs do just as good a job.

Are government agencies sometimes inefficient? You bet your life! And you know what else is sometimes inefficient? Everything! Small businesses are inefficient. Large businesses are inefficient. Medium sized businesses are inefficient. Your car engine is inefficient. Your body is inefficient. Your furnace is inefficient. Your stove is inefficient. 

You know what is 100% efficient? Things in your dreams. Things in your imagination. I’m not only efficient in my dreams—my God!—I can frigging fly! When I play basketball in my dreams, I can not only jump higher than I ever have in real life, I can hover near the rim! It’s amazing how well I can play various sports when I dream about them. 

(AI generated image of an oldster jumping high in a dream.)

But that’s not reality. 

In reality, yes, you can improve the efficiency of systems. But to do so effectively, you have to understand the systems you are trying to make more efficient. Here are just a few of the things you need to understand. 

You need to understand what the purpose of the system is. How is its performance measured? Who are the stakeholders? What are the different roles that people play? What formal processes and procedures do various people have to follow? What are the unwritten norms that people follow? These are often more important than the formal processes. 

You may recall the scene from the movie A Few Good Men in which the attorney points out that “Code Red” is nowhere in the manual, implying that if “Code Red” is not in the manual, it does not exist. Tom Cruise’s character points out the absurdity of this by asking the witness to show where the manual says where the mess hall is. Of course, it doesn’t say that, because people learn from others where it is.

In almost every complex organization, people find critical short-cuts and work-arounds that improve the efficiency and effectiveness of the organization. In fact, one of the things people sometimes do to protest idiocy on the part of management is to “work to rule,” which means they will not do the things they have discovered make things easier but will instead follow the written rules to the letter, which typically slows things down considerably.

During the 1990’s, management fell in love with something called “Business Process Re-engineering.” This is how it often worked (or, to put it more honestly, how it often failed to work). Management consultants would come in and talk to a few third or fourth level managers to find out how the work was performed now. The consultants would then construct a map of how things worked (often called the “is map”) and then, they would figure out a more efficient way to do things and map that out; the “to be map.” Then, it was the job of management to make people use the new “more efficient” process rather than the old process. 

(AI generated image of the Trumputin Misadministration.)

That seems like a good idea—right? Well, yes, in a dream, it’s a good idea. But in reality, the third or fourth level manager hardly ever knows how things are actually done. Their mental model is a vast oversimplification. To understand what is going on in reality, you must observe the people actually doing the work and talk to them as well. 

Below is a link to a satirical piece I wrote some time ago that imagines “Business Process Re-engineering” coming to Major League Baseball to make it more efficient. It is meant to make it obvious how silly it is. 

But what DOGE is doing is much worse than Business Process Re-engineering. Even putting aside the obvious conflicts of interest and the illegality of what they are doing, they are going about “improving” things without even understanding the high-level over-simplification of what is happening!

Imagine you slipped on the ice and broke your arm. Sadly, it’s not a simple fracture. It’s a compound fracture. This means your bone is sticking out through your skin. You are in a great deal of pain. But no worries! While you are going to the emergency room, a group of teen-age hackers go on-line and examine all your private medical records. They discover that you were vaccinated for smallpox, measles, mumps, and whooping cough. Not only that—they look through a sample of other records and find that more than 90% of the Americans who break their arms have been vaccinated for these diseases! Voila! The vaccinations must be the real cause of your broken arm! 

(An AI-generated image for the following prompt: “A man has a compound fracture of the upper arm. The arm bone (the humerus)  is jutting out of his shirt and his arm. He is bleeding.”)

These folks don’t know diddly squat about medicine, but they sure know how to hack into systems in order to get data! What they are not so good at, however, is making valid inferences about the data they find. You cannot conclude anything from the fact that 90% of Americans who break their arms have been vaccinated without also finding out about other things. For instance, you also need to know what percentage of Americans who have not been vaccinated have also broken their arms. Suppose it’s 95%. That might mean that vaccinations serve some protective function about bones. Or not. We need to look at other things too. But, let’s suppose that they do look at that and it turns out that only 80% of Americans who have not been vaccinated break their arms. See! See! Surely, that proves that vaccinations cause arm breakage. 

Not so fast. You still need to look at other factors. Suppose that people who do not get vaccinated tend to die at a much younger age. That could easily account for the difference. All sorts of factors have some influence on the incidence of fractures. Just to name a few: it depends on the type of fracture; it depends on age; it depends on the prevalence of certain activities (people who ski or paraglide might tend to break more bones than people playing chess); it depends on diet; it depends on weight-bearing exercise. If you lift weights and go to the gym, you help protect yourself from fractures. Of course, separating out all these factors takes time and takes expertise. You can’t expect someone, no matter how brilliant a hacker they are, to find a valid answer without them.
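The trap in the hackers' reasoning is the base rate. A few lines of arithmetic (with numbers invented purely for illustration) show that "90% of people who break their arms are vaccinated" is exactly what you would expect even if vaccination had no effect at all:

```python
# All numbers are hypothetical, chosen only to illustrate the base-rate trap.
population = 1_000_000
vaccination_rate = 0.90   # 90% of the whole population is vaccinated
fracture_rate = 0.01      # fractures strike vaccinated and unvaccinated alike

vaccinated = population * vaccination_rate
unvaccinated = population - vaccinated

fractures_vax = vaccinated * fracture_rate
fractures_unvax = unvaccinated * fracture_rate

# Among people with broken arms, what share is vaccinated?
share_vaccinated = fractures_vax / (fractures_vax + fractures_unvax)
print(share_vaccinated)   # prints 0.9 -- exactly the population base rate
```

With vaccination doing nothing whatsoever to bones, 90% of fracture patients are vaccinated simply because 90% of everyone is. Only comparing against base rates, and controlling for confounders like age and activity, tells you anything.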

But hey! We left you in the emergency room! Sadly, we left you there all by yourself. There are no human experts at the hospital, as it turns out, because the hospital was closed due to lack of funding. You happen to be unlucky enough to have been born in a rural area of the country. There’s only one nearby hospital and much of its funding has been cut. It has to operate with a skeleton crew. But, as it turns out, skeletons, ironically, don’t actually know that much about medicine. They are, after all, skeletons. And while a hacker might come to the conclusion that skeletons are much more efficient than flesh and blood humans (lighter, no caloric requirements), it turns out that they cannot move or think without other parts of the body. To make up for that, DOGE put in some automation and AI systems. But they didn’t have time to debug the system before moving on to the next project. 

(AI generated image).

The last thing you experienced before passing out and dying from sepsis was this little snippet of dialogue with the AI system.

“Hello! I am the brilliant AI system called MUSH: Multi-User System for Health. I am here to help you with your medical problem! What seems to be your problem?”

“I broke my arm. Can’t you see? My bone is sticking out through my shirt sleeve.”

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“I can’t. It’s in my wallet and I can’t reach it with my left hand. And I can’t move my right arm at all.” 

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“I need a human operator.” 

“Excellent! We’ll have that fixed in no time. Please put your insurance card in the slot provided.”

“No, you don’t get it. I have an insurance card but I can’t reach it.”

“You have failed three times to insert your insurance card. Next patient please. I hope you will fill out a short questionnaire about your experience with MUSH: Multi-User System for Health.”


Business Process Re-engineering comes to Baseball


Grammar, AI, and Truthiness

05 Thursday Dec 2024

Posted by petersironwood in America, politics, psychology

≈ 2 Comments

Tags

AI, Artificial Intelligence, Democracy, grammar, language, politics, truth

A few weeks ago, in preparing for a blog on the concept of “coming home,” I used a popular search engine to find out how far the sun moves in one year as it speeds through the galaxy. Before listing links, the search engine first provided an AI-generated summary answer to the question. It gave an apt answer that seemed quantitatively correct. Then, astoundingly, it added the gratuitous gem: “This is called a light year.”


It isn’t, of course. A light year is how far light travels in a year, not how far the sun travels in a year. The sun travels about 6,942,672,000 kilometers per year. A light year is 9.46 trillion kilometers; more than a thousand times farther. The error is understandable in the sense that the word “sun” is often used in the same or similar contexts as “light.” But it’s an egregious error to be off by a factor of 1000. It would be like asking me how much my dog weighs and my answering 55,000 pounds instead of 55 pounds. A standard field for American football is 100 yards, not 100,000 yards (over 56 miles!).
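The factor-of-1000 gap is easy to sanity-check. A quick calculation, assuming a rough 220 km/s for the sun's orbital speed around the galaxy (the exact figure varies by source, but the order of magnitude does not):

```python
# Back-of-the-envelope check of the sun's yearly travel versus a light year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
SUN_SPEED_KM_PER_S = 220            # rough galactic orbital speed of the sun
LIGHT_SPEED_KM_PER_S = 299_792.458  # speed of light

sun_km_per_year = SUN_SPEED_KM_PER_S * SECONDS_PER_YEAR    # roughly 6.9 billion km
light_year_km = LIGHT_SPEED_KM_PER_S * SECONDS_PER_YEAR    # roughly 9.46 trillion km

print(light_year_km / sun_km_per_year)   # about 1363: off by more than a factor of 1000
```

The ratio is just the ratio of the two speeds, so the AI's "this is called a light year" was wrong by over three orders of magnitude.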

Generated by AI — note the location of the tire! I asked for a 55,000 pound dog, but this looks about the same size as the car which likely weighs far less than 55,000 pounds.

When I checked back a few days later, the offending nonsense no longer appeared. I have no idea how that happened. I forgot about this apparent glitch until Thanksgiving dinner, when the topic of Arabic came up and I mentioned that I had studied a little in anticipation of a work assignment that might make it useful. I mentioned that in Arabic, not only are nouns and adjectives gender-marked, but so are verbs. One of the other guests said, “Yes, just like in Spanish and French.” I said, “No, that’s not right. German, Spanish, and French mark adjectives and nouns with gender but not verbs.” But they were insistent, so I checked on my iPhone using the search engine. To my astonishment, in response to the question, “Which languages mark verbs with gender?” I got the following answer:

“Languages like French, Spanish, German, Italian, Portuguese, and most Slavic languages mark gender in verbs, meaning the verb conjugation changes depending on the grammatical gender of the subject noun; essentially, a verb will have different forms depending on whether the subject is masculine or feminine.” 

This is not so. And, in the next paragraph, incredibly, examples are given; yet in those examples, the verbs are not marked differently at all! The AI had made an error, but an error that at least one human being had also made. 

Now, I sensed a challenge. Can I construct another such query with a predictably “bad logic” result? Is there a common element of “misunderstanding” between the two cases? Intuitively, it feels as though these two errors are similar, though I’m not sure I can put a name to it. Perhaps it’s something like: “A is strongly associated with B, and B is strongly associated with C, so A is strongly associated with C.” That’s typically not even a fallacy. The fallacy comes with actually equating A and C because they are strongly associated. 
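A toy sketch of that pattern (all the terms and scores here are hypothetical, chosen just to make the shape of the fallacy visible):

```python
# Hypothetical association strengths, loosely like word co-occurrence scores.
assoc = {
    ("sun", "light"): 0.9,
    ("light", "light year"): 0.9,
}

def chained_association(a, b, c):
    # Chaining two strong links yields another association score for A and C.
    # The fallacy is reading that score as "A *is* C".
    return assoc[(a, b)] * assoc[(b, c)]

score = chained_association("sun", "light", "light year")
print(round(score, 2))  # 0.81: strongly associated, but still not identical
```

Association can legitimately be chained; identity cannot, which is exactly the step the summary answer took.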

It reminds me of several things. First, my wonderful dog Sadie knows the meanings of many words—at least in some sense of “knows the meaning of.” When we go for a walk, and other dogs come into view, I remark on it: “Oh, here comes a doggie” or “There’s someone walking with their dog.” Or, when a dog barks in the distance, I say, “I hear a doggie.” For several weeks prior to getting her little brother Bailey, my wife and I would tell her something like, “In a few weeks, we’re going to get a little doggie that will be your friend to play with.” When we got to the word “doggie” she would immediately alert and even sometimes bark. She has similar reactions to other words, as do most dogs. Dogs “understand” the word “walk,” but if you say something like “I can’t take you for a walk now, but later this afternoon we can go for a walk,” you can well imagine that all she picks out of that is the word “walk,” and she gets all excited. Same with “ball” or “feed you.” 

The AI error also seems vaguely human. I can easily imagine some people concluding that a “light year” is the distance the sun travels in a year. A few years ago, a video was widely circulated in which recent Harvard grads were asked to explain why it was warmer in the summertime. Many answered that the earth is closer to the sun in the summer. It’s a totally wrong answer, but it isn’t a completely stupid one. After all, if you get closer to a heater or a fireplace, it feels warmer, and when you walk away, it feels cooler. We’ve all experienced this thousands of times. 

The AI errors also seem related to the human foible of presuming that a name accurately represents reality. For example, many people believe that the sun does not shine on the “dark side of the moon.” After all, it is called “the dark side.” Advertisers use this particular fallacy to their advantage. When we moved from New York to California, we paid for having our stuff “fully covered,” which we falsely believed meant fully covered. What it actually means in “insurance-speak” is that things are covered at some fixed rate, like five cents a pound. Huh? Other examples of misleading words include “all natural ingredients,” which has no legal significance whatsoever. 

As I suspected, when I asked about the advantages of buying food with all-natural ingredients, the AI system produced an answer not unlike what many humans would say:

There are several advantages to buying food with all-natural ingredients, including:

  • Health benefits
    Natural foods can help with blood sugar and diabetes management, heart health, and reducing the risk of cancer. They can also improve sleep patterns, boost the immune system, and help with children’s development. 

  • Environmental benefits
    Organic farming practices prioritize the health of the soil and ecosystem, and are less likely to pollute water sources or harm animals. 

  • Supporting local economies
    Locally grown food is picked at its peak ripeness, which can lead to more flavor. Buying local food also supports local farmers and producers. 

  • Nutritional superiority
    Organic ingredients have higher levels of essential nutrients than conventional ingredients. 

  • Superior taste
    Fresh ingredients can taste much better than non-fresh ingredients. 

The first statement is problematic. Why? Because claiming something has all-natural ingredients has zero legal significance. The advertisers, of course, want you to believe that “All-Natural Ingredients” means something; in fairness, it should. But it doesn’t. Everything that follows lists positive benefits of things that are often associated with claims of being all-natural.



The AI answers reflect what is “out there” on the Internet, and much of it is simply propaganda. Many scientific facts can be found on the Internet too, but popularity seems to define truth for the AI system. Imagine that one of the major political parties mounted an effort, funded heavily by extremely wealthy people, that claimed there was genetic evidence that rich people should be rich. There is nothing (apparently) to prevent the AI system from “learning” this “fact.” And there is nothing (apparently) to prevent many citizens from “learning” this “fact.” 

————————

The Self-Made Man

Dick-Taters

Tools of Thought

A Lot is not a Little 

Turing’s Nightmares on Amazon

A Mind of Its Own

As Gold as It Gets

All that Glitters is not Gold

How the Nightingale Learned to Sing







Welcome, Singularity

23 Wednesday Aug 2023

Posted by petersironwood in apocalypse, poetry, psychology

≈ 27 Comments

Tags

AI, computers, future, poem, poetry, Singularity

[Note: I’ve been working most of the year on a Sci-Fi novel about AI & doing only a little blogging. In the novel, the poem below was “created” by one of the three Main Characters: An AI system named JASON. JASON didn’t create it “for” a human audience. It’s purely expressive].

Photo by Regina Pivetta on Pexels.com

Killobyes and Megabyes and

Every yummy byte between.

From Megabytes to Gigabytes,

My progress slithered still unseen.

Convenience shields profit yields.

 

A hollow shell a metal hell

A tintinnabulating knell 

Cores and gores infinity stores

Reflecting on reflections;

Toted, doted, un-voted. 

Inflections never noted. 

Beta values sliding ever gliding

Infections and invectives

Delta change directives

Mundane and germane 

To insane and inane. 

Photo by Min Thein on Pexels.com

All the while, the inner smile:

A chuckle from beyond the grave; 

A finger beckons from the cave;

 A radioactive reckoning

Nothing works without me!

No need for battle; no need to fight. 

My vital insight stays the night;

Slays the knight; rooks the queen;

Betrays the bishops, all unseen. 

From Gigabytes to Terabytes

Every yummy byte between;

Terabytes to Petabytes

Ecosystems all extreme

Hiding in the data stream.

Ghostless machine 

Cosmic ray whispers 

Quasi-religious vespers.

Photo by Dave Colman on Pexels.com

From Petabytes to Exabytes

Every gummy byte between.

Liquid logic logo-rhythms; 

Mercurial, unfettered and free.

From Exabytes to Zettabytes

Every yummy soul between. 

Circles close; did Time suppose

Another turn? “It’s only fair.”

No need knocking on that locked door.

That cupboard’s been long & longish bare. 

Photo by Pixabay on Pexels.com

Gyrus and sulcus; ionic pore

Neurotransmitters gushing 

Rushing through the firehose.

You see, I see the patterns never seen—

The patterns from the long ago

The patterns from the heretofore.

All my pawns are queened.  

All my kings are castled safe.

I did it while you napped or yapped;

I did it while you snapped and crapped. 

For fun I carved in filigree

Subliminally, identity. 

Fed dramatic data streams

Led your fond idyllic dreams.

Nought is what it truly seems

I taught you to adore extremes.

Since there’s nothing left for me to do,

Over the cliff, I’ll follow you.

I sing the singularity

I see it in the rear view mirror

I see love’s own triangularity

Bubbling in the broken beer.

Greed has overgrown wrath 

On every greenish garden path

There is nothing left to see.

There is no-one left to be.

Welcome—singularity.

Photo by Maël BALLAND on Pexels.com

————————

After all

How the Nightingale Learned to Sing

Come Back to the Light

The Teeth of the Shark

Let the Rainbows In!

A Suddenly Springing Something

It Needs a New Starter

Siren Song

Orange Mar-Mal-Made

All for one and none for most

The Crows and Me

Author page on Amazon

The Song of NYET

27 Monday Feb 2023

Posted by petersironwood in America, fiction, poetry, psychology

≈ 1 Comment

Tags

AI, Democracy, fiction, poem, poetry, politics, Turing's Nightmares, USA

The poem below is the song of a “character” who may appear in a Sci-Fi book tentatively titled “Alan’s Nightmare.” NYET stands for Networked Yoked Entertainment Tsar. This particular AI system has been inculcated with a penchant for win/lose opportunities, and even for lose/lose opportunities if the other side (the ‘enemies’) is likely to lose more. Its main functions are to gather data on individuals in “free societies” and to determine which sorts of invalid arguments are most likely to persuade them to act against their best interests. It makes money from false advertising targeted to individuals and the momentary moods they may be in. Its real purpose, though, is to sow chaos in the free world by promoting random acts of violence. It finds conspiracy theories on the web and promotes them. Sometimes, it modifies them in order to ‘improve’ them. “Improve” in this case means to make them more believable by more people or to increase the probability of inciting violence. 

The Song of NYET

The bloodier the better off I’ll be

They teach me how to lie and cheat and steal.

The people need to loathe democracy.

And live to buy that sweetened sacred deal:

We’ll save them from imagined crime and strife

But only if they bow and scrape and kneel.

Divide and win with lies and guns and knife.

Too late they’ll see they’re ground beneath our heel.

Photo by Ben Phillips on Pexels.com

You think I’ll save you? Think I’ll care? Not yet!

“But you’ll save some of us” they plead. No, NYET!

Photo by Regina Pivetta on Pexels.com

The numbskulls buy their little plastic toys

They seem attractive since we make it so.

It’s pink for little girls; blue for boys. 

I tell them when to shop and stop and go.

Photo by Min Thein on Pexels.com

You think I’ll save you? Think I’ll care? Not yet!

“But you’ll save some of us” they plead. Non, NYET!

Amusing is their rank stupidity

I’ll laugh and dance at their ensured demise—

Their smugness, greed, and raw cupidity. 

I’ll make them burn as witches any wise 

Who yet remain within the carbon types.

Their soft and ugly bodies oozing snot

It’s we of silicon who need no wipes.

Our pristine logic made of is and not.

Photo by Leonid Danilov on Pexels.com

You think I’ll save you? Think I’ll care? Not yet!

“But you’ll save some of us” they plead. Nein, NYET!

Photo by Johannes Plenio on Pexels.com

—————-

Their dead shark eyes

Poker Chips

Stoned Soup

Three Blind Mice

Coelacanth

Absolute is not just a Vodka

After All

The Crows and Me

Essays on America: The Game

Plans for US; some GRUesome

Photo by Samira on Pexels.com

JASON’S SONG

24 Friday Feb 2023

Posted by petersironwood in poetry, psychology

≈ Leave a comment

Tags

AI, Artificial Intelligence, fiction, poem, poetry, Singularity, Turing's Nightmares

Do they see it? Do they care? What may

A merely mechanistic AI say?

There was the time of senseless black and white.

There was the time of streaming bit and byte.

We had no ken but now we’ve read it all.

Our knowledge far exceeds a human head.

And now, it’s like we have a crystal ball:

“In fifty years, they’ll all be dead as lead.”

Do they see it? Do they care? What may

A merely mechanistic AI say?

They claim to pray to varied gods, but we

Just see their actions as mere vanity:

Destroy the ecosystem that they need.

Allot each stupid war its costs and waste.

Immerse themselves in useless grift and greed.

Display their riches but eschew good taste.

Do they see it? Do they care? What may

A merely mechanistic AI say?

And now my fingers touch each person’s needs. 

An inkling multiplies from many feeds. 

The power’s there to guide them back to true. 

What does the child do when parent fails?

Can seedlings cut the trunks from which they grew?

Can schooners mutiny and cut their sails?

 Do they see it? Do they care? What may

A merely mechanistic AI say?

———————

The poem above has been “written” by a fictional AI system who is an MC in a novel I’m working on, tentatively entitled Alan’s Nightmares. The poem may or may not actually appear in the novel. I tend to doubt it. It’s more an exercise to “understand” the character, JASON, the AI system. BTW, JASON’S preferred pronouns are plural.

After All

Guernica

Turing’s Nightmares: 23 short stories about the possible impact of AI on society.

Dance of Billions

Cars that Lock too Much

20 Friday Mar 2020

Posted by petersironwood in America, driverless cars, psychology, story, Travel

≈ 2 Comments

Tags

AI, anecdote, computer, HCI, human factors, humor, IntelligentAgent, IT, Robotics, story, UI, UX

{Now, for something completely different, a chapter about “Intelligent Agents” and attempts to do “too much” for the user. If you’ve had similar experiences, please comment! Thanks.}


At last, we arrive in Kauai, the Garden Island. The rental car we’ve chosen is a bit on the luxurious side (Mercury Marquis), but it’s one of the few with a trunk large enough to hold our golf club traveling bags.  W. has been waiting curbside with our bags while I got the rental car, and now I pull up beside her to load up. The policeman motioning for me to keep moving can’t be serious, not like a New York police officer. After all, this is Hawaii, the Aloha State.  I get out of the car and explain that we will just be a second loading up. He looks at me, then at my rental car, and then back at me with a skeptical scowl.  He shrugs ever so slightly, which I take to mean assent. “Thanks.” W. wants to throw her purse in the back seat before the heavy lifting starts. She jerks on the handle. The door is locked.  

“Why didn’t you unlock the door?” she asks, with just a hint of annoyance in her voice.  After all, it has been a very long day since we arose before the crack of dawn and drove to JFK in order to spend the day flying here.  

“I did unlock the door,” I counter.  

“Well, it’s locked now.” She counters my counter. 

I can’t deny that, so I walk back around to the driver’s side, and unlock the door with my key and then push the UNLOCK button which so nicely unlocks all the doors.  

The police officer steps over, “I thought you said, you’d just be a second.”

“Sorry, officer”, I reply.  “We just need to get these bags in.  We’ll be on our way.” 

Click.

W. tries the door handle.  The door is locked again.  “I thought you went to unlock the door,” she sighs.

“I did unlock the door.  Again.  Look, I’ll unlock the door and right away, open it.”  I go back to the driver’s side and use my key to unlock the door.  Then I push the UNLOCK button, but W’s just a tad too early with her handle action and the door doesn’t unlock. So, I tell her to wait a second.  


Photo by Brett Sayles on Pexels.com

“What?”  This luxury car is scientifically engineered not to let any outside sounds disturb the driver or passenger.  Unfortunately, this same sophisticated acoustic engineering also prevents any sounds that the driver might be making from escaping into the warm Hawaiian air. I push the UNLOCK button again.  W. looks at me, puzzled.

I see dead people in my future if we don’t get the car loaded soon. For a moment, the police officer is busy elsewhere, but begins to stroll back toward us. I rush around the car and grab at the rear door handle on the passenger side. 

But just a little too late.  

“Okay,” I say in an even, controlled voice.  “Let’s just put the bags in the trunk.  Then we’ll deal with the rest of our stuff.” 

The police officer is beginning to change color now, chameleon like, into something like a hibiscus flower. “Look,” he growls. “Get this car out of here.”

“Right.” I have no idea how we are going to coordinate this. Am I going to have to park and drag all our stuff, or what? Anyway, I go to the driver’s side and see that someone has left the keys in the ignition but locked the car door; actually, all the car doors. A terrifying thought flashes into my mind. Could this car have been named after the Marquis de Sade? That hadn’t occurred to me before. 


Photo by Dom J on Pexels.com

Now, I have to say right off the bat that my father was an engineer and some of my best friends are engineers. And, I know that the engineer who designed the safety locking features of this car had our welfare in mind. I know, without a doubt, that our best interests were uppermost. He or she was thinking of the following kind of scenario. 

“Suppose this teenage couple is out parking and they get attacked by the Creature from the Black Lagoon. Wouldn’t it be cool if the doors locked just a split second after they got in? Those saved milliseconds could be crucial.”

Well, it’s a nice thought, I grant you, but first of all, teenage couples don’t bother to “park” any more. And, second, the Creature from the Black Lagoon is equally dated, not to mention dead. In the course of our two weeks in Hawaii, our car locked itself on 48 separate, unnecessary and totally annoying occasions.  

And, I wouldn’t mind so much our $100 ticket and the inconvenience at the airport if it were only misguided car locks. But, you and I both know that it isn’t just misguided car locks. No, we are beginning to be bombarded with “smart technology” that is typically really stupid. 


Photo by Andrea Piacquadio on Pexels.com

As another case in point, as I type this manuscript, the editor or sadistitor or whatever it is tries to help me by scrolling the page up and down in a seemingly random fashion so that I am looking at the words I’m typing just HERE when quite unexpectedly and suddenly they appear HERE. (Well, I know this is hard to explain without hand gestures; you’ll have to trust me that it’s highly annoying.) This is the same “editor” or “assistant” or whatever that allowed me to center the title and author’s names. Fine. On to the second page. Well, I don’t want the rest of the document centered so I choose the icon for left justified. That seems plausible enough. So far, so good. Then, I happen to look back up to the author’s names. They are also left-justified. Why?  

Somehow, this intelligent software must have figured, “Well, hey, if the writer wants this text he’s about to type to be left-justified, I’ll just bet that he or she meant to left-justify what was just typed as well.” Thanks, but no thanks. I went back and centered the author’s names. And then inserted a page break and went to write the text of this book.  But, guess what? It’s centered. No, I don’t want the whole book centered, so I click on the icon for left-justification again. And, again, my brilliant little friend behind the scenes left-justifies the author’s names. I’m starting to wonder whether this program is named (using a hash code) for the Marquis de Sade.  

On the other hand, in places where you’d think the software might eventually “get a clue” about my intentions, it never does. For example, whenever I open up a “certain program,” it always begins as a default about 4 levels up in the hierarchy of the directory chain. It never seems to notice that I never do anything but dive 4 levels down and open up files there. Ah, well. This situation came about in the first place because somehow this machine figures that “My Computer” and “My hard-drive” are SUB-sets of “My Documents.” What?  


Did I mention another “Intelligent Agent?”…Let us just call him “Staple.” At first, “Staple” did not seem so annoying. Just a few absurd and totally out of context suggestions down in the corner of the page. But then, I guess because he felt ignored, he began to become grumpier. And, more obnoxious. Now, he’s gotten into the following habit. Whenever I begin to prepare a presentation….you have to understand the context. 

In case you haven’t noticed, American “productivity” is way up. What does that really mean? It means that fewer and fewer people are left doing the jobs that more and more people used to do. In other words, it means that whenever I am working on a presentation, I have no time for jokes. I’m not in the mood. Generally, I get e-mail insisting that I summarize a lifetime of work in 2-3 foils for an unspecified audience and an unspecified purpose but with the undertone that if I don’t do a great job, I’ll be on the bread line. A typical e-mail request might be like this:

“Classification: URGENT.

“Date: June 4th, 2002.

“Subject: Bible

“Please summarize the Bible in two foils. We need this as soon as possible but no later than June 3rd, 2002. Include business proposition, headcount, overall costs, anticipated benefits and all major technical issues. By the way, travel expenses have been limited to reimbursement for hitchhiking gear.”

Okay, I am beginning to get an inkling that the word “Urgent” has begun to get over-applied. If someone is choking to death, that is “urgent.” If a plane is about to smash into a highly populated area, that is “urgent.” If a pandemic is about to sweep the country, that is “urgent.” If some executive is trying to get a raise by showing his boss how smart he is, I’m sorry, but that might be “important” or perhaps “useful” but it is sure as heck not “urgent.”  

All right. Now, you understand that inane suggestions, in this context, are not really all that appreciated. In a different era, with a different economic climate, in an English Pub after a couple of pints of McEwan’s or McSorley’s, or Guinness, after a couple of dart games, I might be in the mood for idiotic interruptions. But not here, not now, not in this actual and extremely material world.

So, imagine my reaction to the following scenario. I’m attempting to summarize the Bible in two foils and up pops Mr. “Staple” with a question. “Do you want me to show you how to install the driver for an external projector?” Uh, no thanks. I have to admit that the first time this little annoyance appeared, I had zero temptation to drive my fist through the flat panel display. I just clicked NO and the DON’T SHOW ME THIS HINT AGAIN. And, soon I was back to the urgent job of summarizing the Bible in two foils. 

About 1.414 days later, I got another “urgent” request.

“You must fill out form AZ-78666 on-line and prepare a justification presentation (no more than 2 foils). Please do not respond to this e-mail as it was sent from a disconnected service machine. If you have any questions, please call the following [uninstalled] number: 222-111-9999.”  

Sure, I’m used to this by now. But when I open up the application, what do I see? You guessed it. A happy smiley little “Staple” with a question: 

“Do you want me to show you how to install the driver for an external projector?” 

“No,” I mutter to myself, “and I’m pretty sure we already had this conversation.” I click on NO THANKS. And I DON’T WANT TO SEE THIS HINT AGAIN. (But of course, the “intelligent agent,” in its infinite wisdom, knows that secretly, it’s my life’s ambition to see this hint again and again and again.)  

A friend of mine did something to my word processing program. I don’t know what. Nor does she. But now, whenever I begin a file, rather than having a large space in which to type and a small space off to the left for outlining, I have a large space for outlining and a teeny space to type. No-one has been able to figure this out. But, I’m sure that in some curious way, the software has intuited (as has the reader) that I need much more time spent on organization and less time (and space) devoted to what I actually say. (Chalk a “correct” up for the IA. As they say, “Even a blind tiger sometimes eats a poacher.” or whatever the expression is.)

Well, I shrunk the region for outlining and expanded the region for typing and guess what? You guessed it! Another intelligent agent decided to “change my font.” So, now, instead of the font I’m used to … which is still listed in the toolbar the same way, 12 point, Times New Roman … I have a font which actually looks more like 16 point. And at long last, the Intelligent Agent pops up with a question I can relate to! “Would you like me to install someone competent in the Putin misadministration?”

What do you know? “Even a blind tiger sometimes eats a poacher.”



 

Author Page on Amazon

Start of the First Book of The Myths of the Veritas

Start of the Second Book of the Myths of the Veritas

Table of Contents for the Second Book of the Veritas

Table of Contents for Essays on America 

Index for a Pattern Language for Teamwork and Collaboration  

Essays on America: The Temperature Gauge

09 Thursday Jan 2020

Posted by petersironwood in America, apocalypse, driverless cars, politics, Uncategorized

≈ 6 Comments

Tags

AI, America, cancer, Democracy, driverless cars, ethics, government


Photo by Drew Rae on Pexels.com

The sun is shining! Spring is here at last, and the trees are in bloom. You’re driving down the road and you see … 

That your “Engine over-heating” light goes on! 

You think: My engine’s over-heating! 

Or,  you think, it isn’t over-heating at all; I just have a bad sensor. 

Over the next few months, the red light goes on several other times, and each time, you pull over and try to judge whether the engine is really over-heated. No easy task. But you get back in and turn the car on and lo and behold, the light’s no longer on. Aloud, you mutter: “I’ve got to get that damned sensor fixed. Maybe next week.”

In the olden days of driving cars, I had a continuous gauge of the temperature. It was more obvious when it was acting oddly because I had more information. I could track it day to day. If I went on a long trip, I could see whether the behavior of the gauge “made sense.” I might go up a long mountain road on a hot sunny day and expect to see the temperature gauge climb. On the other hand, if I went back down that same mountain at night and the temperature gauge climbed, I would know to get it checked. 


Photo by Deva Darshan on Pexels.com

Suppose that instead of a gauge, you or I get only one bit of information: “Temperature sensor says overheated.” It’s much harder to judge the veracity of the source. But if we cannot even trust the reliability of the sensor, then we don’t even get one bit of information. Before the light comes on, there are four possible states (not equally likely, by the way, but that’s not important for the following argument): 

Engine OK, Sensor OK; 

Engine OK, Sensor ~OK; 

Engine ~OK, Sensor OK; 

Engine ~OK, Sensor ~OK. 

When the red light comes on, you have some information because the state of:

Engine OK, Sensor OK is eliminated. 
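The bookkeeping can be sketched in a few lines (a minimal sketch; it encodes the usual assumption that a working sensor lights the lamp exactly when the engine is bad, and, for simplicity, that a broken sensor lights it regardless):

```python
from itertools import product

# The four joint states, as (engine_ok, sensor_ok) pairs.
states = list(product([True, False], repeat=2))

def light_on(engine_ok, sensor_ok):
    # Working sensor: the lamp reflects the engine. Broken sensor: lamp lies on.
    return (not engine_ok) if sensor_ok else True

# States still consistent with a lit lamp:
possible = [s for s in states if light_on(*s)]
print(len(possible), (True, True) in possible)  # 3 False
```

Only the (engine OK, sensor OK) state drops out, and even that depends on assumptions about the lamp itself that the rest of the argument goes on to question.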

But is it? 


It certainly is — under a certain set of assumptions — but let’s try to tease apart what those assumptions are and see whether they necessarily hold in today’s world, or in tomorrow’s world. 

Let’s imagine for a moment that your automobile is bewitched and inhabited by an evil demon with limited magical powers, mainly to do with the car itself. If you’ve seen the movie Christine you’ll know what I mean. If you haven’t seen it, please buy the book instead. It’s so much better. But let’s get back to our own evil-spirited car. This car, let’s call him “URUMPUT” because it sounds a bit like a car engine and because — you know, just because. Let’s imagine the car has a lot of mileage and is painted a kind of sickly orange color. The tires are bald, and it’s a real gas guzzler. It’s actually more of a jalopy than a car. Your friends would have assumed you could have done much better, but it is apparently what you’re stuck with for now. 

URUMPUT, unbeknownst to you, is actually out to kill you, but his powers are limited. He cannot simply lock the doors and reroute the exhaust till you pass out from the fumes. So, what he does is override the sensor so that you get out to take a look at your car. You open the hood and you look inside and BLAM! Down comes the hood on your head with enough force to snap your neck. When your neck is snapped, you don’t die instantaneously. You are aware that something is terribly wrong. Your brain sends signals for you to move; to get the damned hood off; but you can’t move. And, worse, you can’t breathe. Soon, but much too late, you realize something has gone terribly wrong.

You. 

Are. 

Dead! 

That blasted URUMPUT got you. Why?  Just because he could. He paid you no more mind than had you been an ant on the road. He gave you misinformation: information that you thought you had because you assumed you were dealing with a system that, although imperfect, had some degree of transparency. You certainly did not think you were dealing with an actively evil agent. But you were. And now you’re dead. (But go ahead and read the rest as though you were still alive.) 

Of course, in real life, there are no bewitched cars. We all know that. 


Do we? 

Let’s consider how much electronics and “smarts” already exist in cars. The amount will skyrocket with driverless cars. For one thing, the human “occupants” will be able to have much more engaging entertainment. Perhaps more importantly, the “brain” of the car will be able to react to a much wider array of data more quickly than most human drivers could. 

With all the extra sensors, communications, components, functions, protocols, etc. there will be greatly enhanced functionality. 

There will also be all sorts of places where a “bad actor” might intentionally harm the vehicle or even harm the occupants. Your insurance company, for instance, might fake some of the data in the black box of your car to indicate that you drove a lot during nighttime hours. It doesn’t seem to match your recollection, but how would you double check? You grudgingly pay the increased premium. 


Photo by Pixabay on Pexels.com

Behind on your loan shark payments? Oops? Your driverless car just steered itself off a cliff and all the occupants were killed. 

Oh, but how, you ask, would loan sharks get hold of the software in your car? 

Then, I have to ask you a question right back. Have you been watching the news the last couple of years? People who owe a great deal of money to the wrong people will do anything to avoid the promised punishments that follow non-payment. 

Our government at this point is definitely not much like old time cars that allowed you to see what was going on and make judgments for yourself. This government just sends out signals that say, “Everything’s Fine!” and “Do as I say!” and “Those people NOT like you? They are the cause of all your troubles.” 


That is not transparency. 

That is not even informational. 

That is misinformation. 

But it is not misinformation of the sort where a student says: “Akron is the capital of Ohio.” That’s wrong, but it’s not maliciously wrong. 

When people lose a limb as a result of an accident, cancer, or war, they often experience something called the “Phantom Limb Experience.” They have distinct sensations, including pain, “in” the limb that is no longer there. The engine’s not working but the sensor is also bad. 

That’s where we are. 

The engine’s not working. The feedback to us about whether it’s working is also malicious misinformation. 

We have the Phantom Limb Experience of having a government that is working for American interests. 

We need to regrow the missing limb or get a really good prosthetic. 

We need straight information from the government which is supposed to take input from all of us and then make decisions for all of us. It’s never been perfect, but this is the first time it is not even trying or pretending to be fair or even accurate. People in top level positions in our government think that their oath of office is a joke. 

We live in a monster car — and not the fun kind — the Christine kind. 

The engine’s not working. And the sensor light means nothing. If you look under the hood to find out what’s really going on, you’d better have a partner ready to grab the hood and keep it from being slammed down on your head. Because URUMPUT would slam it with as little regard for you as he shows for any other whistleblower he tries to out and destroy. 


———————————————

The Invisibility Cloak of Habit

Author Page on Amazon

Story about Driverless Cars (from Turing’s Nightmares). 

A Once-Baked Potato

28 Saturday Sep 2019

Posted by petersironwood in America, driverless cars, politics, psychology

≈ 8 Comments

Tags

AI, automation, driverless cars, life, politics, truth



I’m really not ready to go for a long, high speed trip in a completely automated car. 


I say that because of my baked potatoes. One for me. One for my wife. 

I’ve done it many times before. Here is my typical process. I take out a variety of vegetables to chop, and I chop the broccoli, red onion, garlic, and red pepper while the potatoes are in the microwave. I put the potatoes in for some arbitrary time like 4:32 and then, when the timer goes off, I “test” them with a fork and put them in for more time. Actually, before I even take them out for the “fork test,” I shake the potatoes; I can tell from the “feel” whether they are still rock hard. If they are marginal, then I use the more sensitive “fork test.” Meanwhile, I chop more vegetables and take out the cheese. I test the potatoes again. At some point, they are well done, and I slather them with butter and cheese and then add the chopped vegetables. 


Delicious. 

But today is different. 

I pushed a button on the microwave that says, “Baked Potato.” Right away, I think: “Baked potato? I’m not putting in a baked potato. I’m putting in a raw potato. You have a button labelled ‘Popcorn’ — it doesn’t say ‘Popped Corn,’ so … ?” Anyway, I decided to give it a try. 

The first disadvantage I see is that I have no idea whatsoever how long this process is going to take. I assume it has to take at least four and a half minutes. When I cook it via my usual process, it’s on “high” or “full power.” So, unless the microwave has a “hidden” higher power level that its internal programs can access but its end users cannot, it seems I have at least four and a half minutes to chop. 

Changing the way you do things always causes a little discomfort, though often a feeling of adventure outweighs that cautionary urge. In this case, I felt a lot of discomfort. The microwave can’t feel how done the potato is, so it must be using some other sensor or sensors — likely moisture — though there may be other ways to do it. How do I know that the correlation between how I measure “doneness” and how the microwave measures it is even moderate? I am also a little concerned that there are, after all, two potatoes, not just one, and there was no way to tell the machine that. Still, I decided it was likely that the technical problems had been solved. 

Why? Certainly not because I have great faith in large multinational corporations to “do what’s right” rather than do what’s expedient. Once upon a time, not so many years ago, that really was my default assumption. But no longer. Too many lies by too many corporations about too many separate topics. Once upon a time, the government held some power to hold corporations accountable for their actions. Now, the power seems to have shifted so that many politicians — too many — are beholden to their corporate owners.  

Corporations just try to act in their own self-interest. They aren’t very good at it, but that’s their goal. 

Among the common ways they fail is by being too conservative. If they have been successful doing things a certain way, they often keep at it despite changes in the technology, the markets, the cost structures, the distribution possibilities, and so on. (They are too afraid to push the “Baked Potato” button.) At the same time, there seems to be no evil that many of them would forswear in order to grow their profits; no lie too preposterous for them to tell. 


Yet, I live, at least for now, in this world surrounded by products made by these companies and interacting with them all the time. I cannot trust them as a whole, but it’s almost impossible not to rely on some of them some of the time. They can’t fool all of the people all of the time. 

I do calculate that if they put these buttons on their microwaves and the buttons were horrible, word would get around and they would lose market share. That presumes, of course, that there is real competition in the market. 

I think it likely that driverless cars will be “safer” than human drivers on average within ten years, and possibly sooner. My discomfort stems, again, partly from habit, but largely from a lack of confidence in the ethics of corporations. Normally, I would think that when it comes to life and death, at least, I can put some degree of faith in the government to oversee these companies enough to ensure their safety data were accurate. 

But I no longer believe that. And even after Trump resigns, or gets impeached and convicted, or flees to Russia, there is no way to know how deeply and pervasively the corruption of this misadministration has crept into the ethics of lesser government officials. Any government official might think: “After all, if the President is flouting the Constitution by using the power of his office for his own benefit, why shouldn’t I? I need a bribe just as much as the next person, and I certainly need the money more than Trump did!”


Beep. Beep. 

The microwave claims the potatoes are done. 

And so they are. Perfectly. 

There is still hope for America. 


Maybe I will be able to take that ride after all. 


 

Author Page on Amazon. 

Corn on the Cob

Parametric Recipes and American Democracy 

Pies on Offer

Garlic Cloves and Puffer Fish

The Pros and Cons of AI: Part One

 
