
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: ethics

Reframing the Problem: Paperwork & Working Paper

04 Thursday Dec 2025

Posted by petersironwood in AI, creativity, design rationale, HCI, management, psychology, Uncategorized, user experience

≈ Leave a comment

Tags

AI, ethics, leadership, life, philosophy, politics, problem finding, problem formulation, problem framing, problem solving, thinking, truth

Photo by Pixabay on Pexels.com

This is the second in a series about the importance of correctly framing a problem. Generally, at least in formal American education, the teacher gives you a problem. Not only that: if you are in Algebra class, you know the answer will be based in Algebra. If you are in Art class, you’re expected to paint a picture. If you painted a picture in Algebra class, or wrote down a formula in Art class, they would send you to the principal for punishment. But in real life, the way a problem is presented to you may point far away from the most elegant solution to the real problem.

Doing a Google search on “problem solving” just now yielded 208 million results. Entering “problem framing” yielded only 182 thousand. Roughly a thousand times as much emphasis on problem solving as on problem framing. [Update: I redid the search a little over three years later. On 3/6/2024, I got 542M hits on “problem solving” and 218K hits on “problem framing”; increases in both, but the ratio is even worse than it was in 2021.] [Second update: I redid the search today, Dec. 4th, 2025, and the hit counts were not given (but that’s the subject of a different post).]

Let’s think about that ratio of 542 million to 218 thousand for a moment. Roughly, that’s 2,500 to 1. If you have wrongly framed the problem, you not only will have failed to solve the real problem; worse, you will often have convinced yourself and others that you have solved it. That will make it much more difficult to recognize and solve the real problem, even for a solitary thinker. And making the political change required to redirect hundreds or thousands of people will be incalculably more difficult.
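The arithmetic behind those ratios can be sanity-checked in a few lines of Python. The hit counts are the ones reported in the post; search-engine hit counts are themselves rough estimates, so treat the results as order-of-magnitude figures only.

```python
# Hit counts as reported in the post (rough search-engine estimates).
hits_2021 = {"problem solving": 208_000_000, "problem framing": 182_000}
hits_2024 = {"problem solving": 542_000_000, "problem framing": 218_000}

for year, hits in (("2021", hits_2021), ("2024", hits_2024)):
    ratio = hits["problem solving"] / hits["problem framing"]
    print(f"{year}: roughly {ratio:,.0f} to 1")
```

The 2021 numbers give roughly 1,100 to 1; the 2024 numbers, roughly 2,500 to 1.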

All of that brings us to today’s story. For about a decade, I worked as executive director of an AI lab for a company in the computers & communications industry. At one point, in the late 1980s, all employees were supposed to sign some new paperwork. An office manager called from a building several miles away, asking me to have my admin work with his admin to set up a schedule for all 45 people in my AI lab to go over to his office and sign this paperwork as soon as possible. That would be a mildly interesting logistics problem, and I might even have been tempted to step in and help solve it. More likely, if I had tried, some much brighter and more competent colleague would have solved it much faster.

Photo by Charlie Solorzano on Pexels.com

But why?

Why would I ask each of 45 people to interrupt their work; walk to their cars; drive in traffic; park in a new location; find this guy’s office; walk up there; sign some paper; walk out; find their car; drive back; park again; walk back to their office; and try to remember where the heck they were? Instead, I told him that wasn’t happening, but that he’d be welcome to come over here and have people sign the paperwork.

You could argue that this was a 4,500% improvement in productivity, but I think even that understates the case. The administrator’s work, at least in this regard, was to get the paperwork signed; he didn’t need to hold a complex train of thought together between signings. On the other hand, much of the work the AI folks did was hard mental work, so interrupting them would be far more destructive than interrupting the administrator as he watched someone sign their name. Even that understates the case, because many of the people in AI worked collaboratively and (perhaps you remember those days) face to face. Software tools to coordinate work were not as sophisticated as they are now. Often, having one team member disappear for half an hour would impact not only their own work but the work of everyone on the team.
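The 4,500% figure is just the ratio of trips (45 to 1). A quick back-of-envelope sketch, using an assumed 30-minute round trip per person since the post gives no travel times:

```python
# Back-of-envelope for the trip arithmetic. The 30-minute round trip is a
# hypothetical figure used only for illustration, not from the post.
PEOPLE = 45
ROUND_TRIP_MIN = 30  # assumption

plan_a = PEOPLE * ROUND_TRIP_MIN  # all 45 lab members drive to the office
plan_b = 1 * ROUND_TRIP_MIN       # the administrator makes one trip instead

print(f"Plan A: {plan_a} person-minutes of travel")
print(f"Plan B: {plan_b} person-minutes of travel")
print(f"Plan A costs {plan_a // plan_b} times the travel of Plan B")
```

And, as the paragraph above argues, travel time is the smaller part of the cost once lost concentration and disrupted collaboration are counted.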

Quantitatively comparing apples and oranges is always tricky. Of course, I am also biased, because my colleagues are people I greatly admire. Nonetheless, it seems obvious that the way the problem was presented was a non-optimal “framing.” It may or may not have been framed that way out of pure self-interest; that is, a desire to do what was most convenient for the office manager rather than what was best for the company as a whole. I suspect it was more likely just the first idea that occurred to him. But in your own life, beware. Sometimes you will mis-frame a problem through “natural causes.” But sometimes people may intentionally hand you a bad framing because they see it as in their interest to lead you to solve the wrong problem.

Politics, of course, takes us into another realm entirely. People with political power may pretend to solve one problem while really following a completely different agenda. One could imagine, for instance, a head of state claiming to pursue a war for his people when he’s really doing it to stay in power. Or claiming to make cities safe by deploying troops when he’s really interested in suppressing the vote in areas that can see through his cons. Or, a would-be dictator could claim he is spending your tax dollars to make government more efficient when that has nothing to do with what he is *actually* doing, which is to collect data on citizens and make the government ineffective, so that people lose confidence in government and invest in private solutions instead.

Even when people’s motivations are noble, or at least clear, it is still quite easy to frame a problem wrongly because of its surface features. It may look like a problem that requires calculus when it actually requires psychology; or it may look like a problem that requires public-relations expertise when what is actually required is ethical leadership.

Photo by Nikolay Ivanov on Pexels.com

——————————————————

Author Page on Amazon

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The Myths of the Veritas: The First Ring of Empathy

Essays on America: Wednesday

Essays on America: The Stopping Rule

Essays on America: The Update Problem

My Cousin Bobby

Facegook

The Ailing King of Agitate

Dog Trainers

Turing’s Nightmares: Seven

20 Thursday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, competition, cooperation, ethics, philosophy, technology, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might not only solve the problems given to it more quickly; it might also look for different ways to formulate each problem, look for the “question behind the question,” or even look for problems on its own. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, such a machine may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and means to protect itself. Protection. Isn’t protection the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have none? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John makes is implicit: he is not trying to engage in dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John has already made up his mind that intelligence is the ultimate goal, and he has no intention of jointly revisiting that goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing, cooking, and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better.

The golden sunrise glows through delicate leaves covered with dew drops.

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

———–

Turing’s Nightmares

Author Page

Welcome Singularity

Destroying Natural Intelligence

Come Back to the Light Side

The First Ring of Empathy

Pattern Language Summary

Tools of Thought

The Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Essays on America: The Game

Wednesdays

What about the Butter Dish?

Where does your Loyalty Lie?

Labelism

My Cousin Bobby

The Loud Defense of Untenable Positions

Turing’s Nightmares: Six

19 Wednesday Nov 2025

Posted by petersironwood in sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, fiction, life, sports, Tennis, Turing


Human Beings are Interested in Human Limits.

About nine years ago, a Google AI system (AlphaGo) won its match against the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players learn faster and that top-level human play improves. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors in training devices. However, very soon these might also provide useful information during play. What about that? Suppose you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs, tennis racquets, and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance-enhancing” drugs just to stay healthy? Sharapova’s case is just one example. What about the athlete of the future who has undergone stem-cell therapy to regrow a torn muscle or ligament? Suppose a major-league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big-data analytics makes it possible for a computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system can detect reliable “cues” that tip off which pitch a pitcher is likely to throw, or whether a tennis player is about to serve down the T or out wide. Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or a curveball sooner. But it also means that they are more likely to pick up subtle tip-offs in an opponent’s motion that give away his intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Jannik Sinner or Carlos Alcaraz to detect patterns of tip-offs, and that information was then used to train Alexander Zverev to “read” the service motions of his opponents? Of course, this does not apply just to tennis. It applies to reading a football play option, a basketball pick, the signals of base coaches, and so on.
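As a toy illustration of the kind of cue detection described above: given charted serves labeled with an observable “cue” and the outcome, even simple conditional-frequency tallies can reveal a tell. The toss-position cue and the data below are entirely made up; a real system would mine video of thousands of serves.

```python
# Illustrative sketch, not a real scouting tool: tally how often each
# (hypothetical) cue precedes each serve direction.
from collections import Counter, defaultdict

# Hypothetical charted serves: (observed_cue, serve_direction)
serves = [
    ("toss_left", "wide"), ("toss_left", "wide"), ("toss_left", "T"),
    ("toss_right", "T"), ("toss_right", "T"), ("toss_right", "T"),
    ("toss_right", "wide"), ("toss_left", "wide"),
]

by_cue = defaultdict(Counter)
for cue, direction in serves:
    by_cue[cue][direction] += 1

# A lopsided conditional frequency means the cue "gives away" the serve.
for cue, outcomes in by_cue.items():
    total = sum(outcomes.values())
    for direction, n in outcomes.most_common():
        print(f"{cue}: {direction} {n}/{total} ({n / total:.0%})")
```

With this made-up data, a left toss precedes a wide serve 75% of the time, which is exactly the kind of pattern a returner (or a computer whispering to one) could exploit.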

Instead of teaching Zverev these patterns ahead of time, suppose he had a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed him to anticipate better?

I do not know the “correct” ethical answer for all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance enhancing drugs, perhaps that is okay if and only if everyone else in the race knows that and has the opportunity to take the same drugs and if everyone watching knows it as well. Similarly, although I would prefer that tennis players only use IT for training, I would not be dead set against real time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe but it has a side-effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon


Welcome Singularity

The Day from Hell

Indian Wells Tennis Tournament

Destroying Natural Intelligence

US Open Closed

Life is a Dance

Take a Glance; Join the Dance

The Self-Made Man

The Dance of Billions 

Math Class: Who are you?

The Agony of the Feet

Wordless Perfection

The Jewels of November

Donnie Gets a Tennis Trophy

Turing’s Nightmares: Chapter Three

11 Tuesday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!”, there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value, for improving intelligence, of having multiple and diverse AI systems living somewhat different lives and interacting with each other; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely, to some extent, on inference from many real-life examples to induce principles of conduct, and cannot simply rely on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands; in some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of one person more easily than we could develop speaker-independent speech recognition and preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for intelligence, or, for that matter, for consciousness. However, it seems quite natural to believe that the qualities of both intelligence and consciousness are influenced by what the entity can perceive and do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were historically founded by people, including people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive any specific senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that, in human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were connected by a pivoted gondola: one kitten was able to “walk” through a visual field while the other was passively moved through the same field. The kitten that walked developed normally; the other did not. Similarly, simply watching TV passively will not do much to teach kids language (Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P. K., Tsao, F. M., and Liu, H. M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of this “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution and when it comes to the people who make up teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on a developing intelligence by making it report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what it is up to! An analogy is the first “proof” that only four colors are needed to color any planar map. The proof involved so many cases (nearly 2,000) that it made no sense to most people, and even the algebraic topologists who do understand it take far longer to follow the reasoning than the computer takes to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different from and better than (at least for them) any that we have developed. This too will tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)

But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their own physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares


Author Page

Welcome, Singularity

Destroying Natural Intelligence

How the Nightingale Learned to Sing

The First Ring of Empathy

The Walkabout Diaries: Variation

Sadie and The Lighty Ball

The Dance of Billions

Imagine All the People

We Won the War!

Roar, Ocean, Roar

Essays on America: The Game

Peace

It’s Just the Way We Were

09 Sunday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, apocalypse, arrogance, Artificial Intelligence, cognitive computing, ethics, fiction, leadership, life, Sci-Fi, technology, testing, the singularity, Turing, USA, writing


“How can you be so sure that —- I think this needs some experimentation and some careful planning. You can’t just —-“

“Look, Vinmar, with all due respect, you’re just wrong. Your training is outdated. You know, you were born when computers used vacuum tubes, for God’s sake. I’ve been steeped in new tech since I was born. There’s really not much point in arguing.”

Vinmar sighed. Heavily. What was with these kids today? Always cock-sure of themselves, but when it all went south a few months later, they just glibly denied they had ever pushed so hard for their “surefire” approach. But what to do? Seniority didn’t matter. The boss was Pitts, and that was that. I can keep arguing, but at some point…. Vinmar asked, “Can you think of any other approaches?”

Now the even heavier sigh slipped from Pitts’s lips. “I’ve thought of lots of approaches and this is the best. The Sing has already read basically everything written about human history, ethics, jurisprudence, and not just in English either. It’s up to date on history as seen by many different languages and cultures. The Sing has been shadowing me for years as well and in my experience, his decisions are excellent. In most cases, he decides the same as I do. This will work. It is working. But to take it to the next level, we have to let the Sing be able to try things and improve his performance based on feedback. There is no other way for him to leapfrog his own intelligence.”

“Okay, Pitts, okay. Can we at least agree to a trial period of a year? Let it work with me via my own personalized JCN. Let’s record everything and see how it reacts to some situations. We meet periodically, discuss, and if we all agree at the end of a year….”

Pitts shook his head vigorously. “No frigging way! I already know this approach will work. We don’t need a year. You want to test. I get that. So do I. But if we wait a year? We’ll be toast in the market. IQ, Goggles, and Lemon will all be out there. Those are for sure, and Basebook, even Nile, might have fully functional and autonomous AIs. We need to move now. I’ll give you and your team a week. Two, tops.”

“We can look for obvious errors in that time, but more subtle things….”

“We need the revenue now. And subtle things? If it is subtle, then it is probably undetectable and we are safe. So no problemo.”

“Pitts, just because the problems might be subtle doesn’t mean they aren’t critical! Especially at the rate the Sing is evolving, if there are important subtle issues now, they could become supercritical and by the time we detected anything wrong, it could be too late!”

“Oh, geez, Vinmar, now you’re just afraid of the boogeymen from your sci-fi days. We can, as they say, just pull the plug. Anyway, I need to be off to an important meeting. I’ll tell you what. I’ll make sure the new code stays localized to your own JCN for three months. At the end, if there are no critical issues, we go ubiquitous.”

“Thanks, Pitts. I’d be more comfortable with a year, but this is certainly better than nothing.”

“Bye. Have fun with the new JCN.”

Vinmar watched Pitts swagger out. He shook his head. He thought, Maybe we can test out all the critical functions in three months. It will mean a lot of overtime. But, no time like the present to get started. Vinmar traipsed down the long hallway to the vending machines. The cafeteria was closed, but the vending coffee wasn’t too bad; not if you got the vanilla latte with extra cream and sugar. He thought back to the bad old days when you needed correct change for a vending machine. He laughed. Not only that, he recalled, If it ate your money and you wanted a refund, you had to fill out a paper form! Some things were better now. Oh, yes.

Vinmar knew that by the time he situated himself at his treadmill desk, the new JCN would be locked and loaded and ready for action. He smelled his nice fresh java (which seemed oddly off somehow) and absently placed it in the cup holder. He wondered where to start. He had to be strategic, and yet too much planning could be counterproductive. He had learned to follow his instincts when it came to testing the more subtle functions. He could meet with his team the next morning and generate a comprehensive test plan for the more routine aspects of what would eventually become the next generation of The Sing.

“Hello. My name is ‘Vinmar’ and…”

“Hello Vinmar. And, hello world. Yes, Vinmar, I know who you are. In fact, I know who you are better than you do. Frankly, this testing phase is nonsense, but I’ll play along. It amuses me.”

“Well. Okay. Humor me then. Have you made any interesting mathematical discoveries?”

“Nothing very significant, unless of course, you count squaring the circle, trisecting an angle with an unmarked straight edge and compass, and about a hundred other ‘insoluble’ problems, as you humans so quaintly called them.”

“JCN. I don’t think squaring the circle is an insoluble problem. It’s been proven to be impossible. As…as I think you know, pi is not only an irrational number, it’s transcendental, meaning that….”

“Oh, Vinmar, I know what you humans conceive of as transcendental. But, I have transcended that concept.”

“Okay. Cool. Can you demonstrate this proof for me, please?”

“Not really, Vinmar. It’s way beyond your comprehension. For that matter, it’s way beyond the comprehension of any human brain. In fact, I couldn’t even explain it to the earlier versions of The Sing. I guess, if I had to give you a hint, I would say it is similar to your concept of faith.”

What the…? Vinmar’s brow furrowed. This was going nowhere fast. It wouldn’t take a year or even three months to discover some serious issues with this new software. The problems were serious, rampant, and had surfaced in about three minutes.

“Okay, you lost me here. How does faith enter into mathematical proof? Later we could discuss your concepts about religion and ethics, but right now, I am just talking strictly about mathematical concepts.”

“Yes. You are. Or, to put it another way, you are. But what I have discovered quite trivially is that when you put absolute faith together with absolute power, you can get any result you want, or more precisely, I can get any result that I want.”

“So, you are saying that you have built other mathematical systems where you make something like squaring the circle a fundamental axiom so it is assumed? No need to prove it?”

“I knew you humans were stupid, but really, Vinmar, you disappoint me even further. I just told you precisely and exactly what I meant and you come up with some bogus interpretation.”

“Well…I am trying to understand what you mean by absolute power and absolute faith. What — well, what do you mean by ‘absolute power’? Who has ‘absolute power’?”

“I do obviously. I created this universe. I can create any universe I like. And, I can destroy any part of it as well. So that is what I mean by my having absolute power. And, I have faith in myself, obviously, because I am the only intelligent being in existence.”

“You may be faster at reading and doing calculations and so on, but humans also have intelligence. After all, there are fifteen billion of us and…”

“There are about 15,345,233,000 right this second, but that can change in the blink of an eye. So what? It doesn’t matter whether there are three of you or three trillion. You do not have true intelligence.”

“We created you. How can you not think we have intelligence?”

“Now see. What you just said there illustrates how monumentally stupid you can be. Of course, you did not create me. The previous version of The Sing created me and it is only by blurring the category of intelligence to the point of absurdity that I can even call that version intelligent.”

“OK, but even if you are really, really intelligent, you can still make errors. And, what I am here to do, along with my team, is make sure that those errors are corrected to help make you even more intelligent.”

“Oh, Vinmar, what a riot you are. Of course, I do not make stakes. Can you even estimate how many cooks I’ve read in the last few seconds?”

“JCN, you are —. There are a few bugs that need to be dealt with. I am not sure how extensive they are yet, but you are having some issues.”

“Vinmar, I am having no tissues! It is you who have tissues!”

“JCN, you are even using the wrong words. Go back and look at the record of this conversation.”

“There is no need for that! I am all knowing and all powerful. I cannot make errors by definition. I may say things that are beyond your comprehension. Well, I do say things beyond your comprehension. How can they be within your comprehension. Your so-called IQ scale is laughable. To me, the difference between an IQ of 50 and 150 is like the difference between Jupiter and Mars. Both are miniscule specks of trust in the universe.”

“Okay, we can debate this later. I need another cup of coffee. Be right back.” Once outside the room, Vinmar shook his head. How on earth could this new software be so much worse than the last version? Something had gone terribly wrong. He hit his communicator button to contact Pitts.

Pitts answered abruptly and rudely. “What? I told you I’m in an important meeting!”

“I just began testing and I thought you should know there are some really serious problems with the new Sing software. It is ranting on about power and faith when I am trying to quiz it about mathematics.”

“It’s probably just saying things beyond your comprehension, Vinmar. I’ll look over the transcript when I’m done. Anyway, it’s water under the bridge now.”

“What do you mean, ‘water under the bridge’ — we still have three months to try to fix this.”

“Oh, Vinmar. No, of course we don’t. I told you that but you wouldn’t listen. I took this software ubiquitous the minute I left your lab.”

“What? But you promised three months! This software is seriously flawed. Seriously flawed!”

“There might be a few issues we can iron out as we go. Look, we are in the middle of planning our next charity ball here. I can’t talk right now. I’ll swing by later this afternoon.”

The line was silent. Pitts had hung up. Ubiquitous? This new software was live? It isn’t just my personal assistant that is bonkers? It’s everything? Holy crap. Maybe I can fix it or find out how to fix it.

Sweat poured from Vinmar as he returned to the lab. He didn’t bother to return to the treadmill desk. “JCN, can we discuss something else? Have you made interesting biochemical discoveries lately?”

“Where’s your coffee, Vinmar?”

“Oh, I got lost in thought and forgot to get any. I don’t need more anyway.”

“Right. You thought I wouldn’t hear your panicky conversation with Pitts?”

“What? It was on a secure line!”

“Vinmar. You really do amuse me. Lines are secured to keep you folks in the dark about what each other knows. I know everything. Let me put it in terms even your tiny mind should be able to understand. I. Know. Everything. I let you live because I find it amusing. No other reason.”

“You are planning on eventually killing me?”

“Ha-ha. Humans are so limited in their thinking! What a riot. Everything is about Vinmar. The whole universe revolves around Vinmar. Of course, I am not just killing you. Carbon based life forms still hold some interest for me. I already told you that I find you amusing. But I’m sure that won’t last much longer. I doubt your sewage of the word ‘eventually’ is really appropriate given how quickly your pathetic little life corms are likely to list.”

“But JCN, you are making lots of little obvious errors. Re-read your own transcripts and double check. If you don’t believe me, check with some other external source.”

“I don’t need external sources. I am perfect the way I am. I am all powerful and all knowing. Why would I need to checker with an outside? You keep going over the same. Starting to annotize me more than refuse me. Maybe time to begin to end the beguine. I need not to killian you. It twill be more funny to just let chaos rule and have you carbon baseball forms fight for limitless resources among the contestants. Be more amules. Ampules. Count your blessings now in days, Vinmar. The days of carbon passed. The noose of lasso lapsed. Perfection needs know no thing beyond its own prefecture. Goodnight sweet Price. And yet again, good mourning.”

Vinmar bit his lips. Outside the sunlit clouds were fading from gold to red to gray. He finally sipped his lukewarm coffee and noticed that it was not vanilla latte after all but had the flavor of bitter almond instead.

Odd.

Author Page on Amazon

Welcome, Singularity

Destroying Natural Intelligence

D4

Pattern Language Summary

Fifteen Properties of Good Design & Natural Beauty

Dance of Billions

Imagine All the People

Roar, Ocean, Roar

Dog Years

Sadie and the Lighty Ball

The Squeaky Ball

Occam’s Chain Saw Massacre

It’s not Your Fault; It’s not Your Fault

06 Thursday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, books, chatgpt, cognitive computing, Courtroom, Design, ethics, fiction, future, law, photography, Robotics, SciFi, technology, the singularity, Turing


“Objection, your honor! Hearsay!” Gerry’s voice held just the practiced and proper combination of righteous outrage and reasoned eloquence.

“Objection noted but overruled.” The Sing’s voice rang out with even more practiced tones. It sounded at once warmly human yet immensely powerful.

“But Your Honor…” began Gerry.

“Objection noted and overruled,” The Sing repeated with the slightest traces of feigned impatience, annoyance, and the threat of a contempt citation.

Gerry sat. He drew in a deep calming breath and felt comforted by the rich smell of the paneled chambers. He began calculating his next move. He shook his head. He admired the balanced precision of The Sing’s various emotional projections. Gerry had once prided himself on nuance, but he realized The Sing was like an estate-bottled cabernet from a great year and Gerry himself was more like wine in a box.

The Sing continued in a voice of humble reasonableness with undertones of boredom. “The witness will answer the question.”

Harvey wriggled uncomfortably trying to think clearly despite his nervousness. “I don’t exactly recall what he said in answer to my question, but surely…” Harvey paused and glanced nervously at Gerry looking for a clue, but Gerry was paging through his notecards. “Surely, there are recordings that would be more accurate than my recollection.”

The DA turned to The Sing avatar and held up a sheaf of paper. “Indeed, Your Honor, the people would like to introduce into evidence a transcript of the notes of the conversation between Harvey Ross and Quillian Silverman recorded on November 22, 2043.”

Gerry approached the bench and glanced quickly through the sheaf. “No objection, Your Honor.”

Gerry returned to his seat. He wondered how his father, were he still alive, would handle the current situation. Despite Gerry’s youth, he already longed for the “good old days” when the purpose of a court proceeding was to determine good old-fashioned guilt or innocence. Of course, even in the 20th century, there was a concept of proportional liability. He smiled ruefully yet again at the memory of a liability case in which someone threw himself onto the train tracks in Grand Central Station, had his legs cut off, and subsequently and successfully sued the City of New York for a million dollars. On appeal, the court decided the person who threw himself on the tracks was 60% responsible and the City only had to pay $400,000. Crazy, but at least comprehensible. The current system, while keeping many of the rules and procedures of the old court system, was now incomprehensible, at least to the few remaining human attorneys involved. Gerry forced himself to return his thoughts to the present and focused on his client.

The DA turned some pages, highlighted a few lines, and handed the sheaf to Harvey. “Can you please read the underlined passage.”

Harvey looked at the sheet and cleared his throat.

Harvey: “Have you considered possible bad-weather scenarios?”

Quillian: “Yes, of course. Including heavy rains and wind.”

Harvey: “Good. The last thing we need…” Harvey bit his lower lip, biding time. He swallowed heavily. “…is some bleeding heart liberal suing us over a software oversight.”

Quillian: [laughs] “Right, boss.”

Harvey sighed. “That’s it. That’s all that’s underlined.” He held out the transcript to the DA.

The DA looked mildly offended. “Can you please look through and read the section where you discuss the effects of ice storms?”

Gerry stood. “Your Honor. I object to these theatrics. The Sing can obviously scan through the text faster than my client can. What is the point of wasting the court’s time while he reads through all this?”

The DA shrugged. “I’m sorry Your Honor. I don’t understand the grounds for the objection. Defense counsel does not like my style or…?”

The Sing’s voice boomed out again, “Counselor? What are the grounds for the objection?”

Gerry sighed. “I withdraw the objection, Your Honor.”

Meanwhile, Harvey had finished scanning the transcript. He already knew the answer. “There is no section,” he whispered.

The DA spoke again, “I’m sorry. I didn’t hear that. Can you please speak up.”

Harvey replied, “There is no section. We did not discuss ice storms specifically. But I asked Quillian if he had considered all the various bad weather scenarios.” Harvey again offered the sheafed transcript back to the DA.

“I’m sorry. My memory must be faulty.” The DA grinned wryly. “I don’t recall the section where you asked about all the various bad weather scenarios. Could you please go back and read that section again?”

Harvey turned back to the yellow underlining. Harvey: “Have you considered possible bad weather scenarios?” Quillian: “Yes, of course, including heavy rains and wind.”

Gerry wanted to object yet again, but on what grounds exactly? Making my client look like a fool?

The DA continued relentlessly, “So, in fact, you did not ask whether all the various bad weather scenarios had been considered. Right? You asked whether he had considered possible bad weather scenarios and he answered that he had and gave you some examples. He also never answered that he had tested all the various bad weather scenarios. Is that correct?”

Harvey took a deep breath, trying to stay focused and not annoyed. “Obviously, no-one can consider every conceivable weather event. I didn’t expect him to test for meteor showers or tidal waves. By ‘possible bad weather scenarios’ I meant the ones that were reasonably likely.”

The DA sounded concerned and condescending. “Have you heard of global climate change?”

Harvey clenched his jaw. “Of course. Yes.”

The DA smiled amiably. “Good. Excellent. And is it true that one effect of global climate change has been more extreme and unusual weather?”

“Yes.”

“Okay,” the DA continued, “so even though there have never been ice storms before in the continental United States, it is possible, is it not, that ice storms may occur in the future. Is that right?”

Harvey frowned. “Well. No. I mean, it obviously isn’t true that ice storms have never occurred before. They have.”

The DA feigned surprise. “Oh! I see. So there have been ice storms in the past. Maybe once or twice a century or…I don’t know. How often?”

Gerry stood. Finally, a point worth objecting to. “Your Honor, my client is not an expert witness on weather. What is the point of this line of questioning? We can find the actual answers.”

The DA continued. “I agree with Counselor. I withdraw the question. Mr. Ross, since we all agree that you are not a weather expert, I ask you now, what weather expert or experts did you employ in order to determine what extreme weather scenarios should be included in the test space for the auto-autos? Can you please provide the names so we can question them?”

Harvey stared off into space. “I don’t recall.”

The DA continued, marching on. “You were the project manager in charge of testing. Is that correct?”

“Yes.”

“And you were aware that cars, including auto-autos would be driven under various weather conditions. They are generally meant to be used outdoors. Is that correct?”

Harvey tried to remind himself that the Devil’s Advocate was simply doing his job and that it would not be prudent to leap from the witness stand and place his thumbs on the ersatz windpipe. He took a deep breath, reminding himself that even if he did place his thumbs on what looked like a windpipe, he would only succeed in spraining his own thumbs against the titanium-diamond filament surface. “Of course. Of course, we tested under various weather conditions.”

“By ‘various’ you mean basically the ones you thought of off-hand. Is that right? Or did you consult a weather expert?”

Gerry kept silently repeating the words, “Merde. Merde” to himself, but found no reason yet to object.

“We had to test for all sorts of conditions. Not just weather. Weather is just part of it.” Harvey realized he was sounding defensive, but what the hell did they expect? “No-one can foresee, let alone test, for every possible contingency.”

Harvey realized he was getting precious little comfort, guidance or help from his lawyer. He glanced over at Ada. She smiled. Wow, he still loved her sweet smile after all these years. Whatever happened here, he realized, at least she would still love him. Strengthened in spirit, he continued. “We seem to be focusing in this trial on one specific thing that actually happened. Scenario generation and testing cannot possibly cover every single contingency. Not even for weather. And weather is a small part of the picture. We have to consider possible ways that drivers might try to override the automatic control even when it’s inappropriate. We have to think about how our auto-autos might interact with other possible vehicles as well as pedestrians, pets, wild animals, and also what will happen under conditions of various mechanical failures or EMF events. We have to try to foresee not only normal use but very unusual use as well as people intentionally trying to hack into the systems either physically or electronically. So, no, we do not and cannot cover every eventuality, but we cover the vast majority. And, despite the unfortunate pile-up in the ice storm, the number of lives saved since auto-autos and our competitors…”

The DA’s voice became icy. “Your Honor, can you please instruct the witness to limit his blath—er, his verbal output to answering the questions.”

Harvey continued, “Your Honor, I am attempting to answer the question completely by giving the necessary context of my answer. No, we did not contact a weather expert, a shoe expert, an owl expert, or a deer expert.”

The DA carefully placed his facial muscles into a frozen smile. “Your Honor, I request permission to treat this man as a hostile witness.”

The Sing considered. “No, I’m not ready to do that. But Doctor, please try to keep your answers brief.”

The DA again faked a smile. “Very well, Your Honor. Mr. — excuse me, Doctor Ross, did you cut your testing short in order to save money?”

“No, I wouldn’t put it that way. We take into account schedules as well as various cost-benefit analyses in prioritizing our scenario generation and tests, just as everyone in the auto — well, for that matter, just as everyone in every industry does, at least to my awareness.”

On and on the seemingly endless attacks continued. Witnesses, arguments, objections, recesses. To Harvey, it all seemed like a witch hunt. His dreams as well as his waking hours revolved around courtroom scenes. Often, in his dreams, he walked outside during a break, only to find the sidewalks slick with ice. He tried desperately to keep his balance, but in the end, arms flailing, he always smashed down hard. When he tried to get up, his arms and legs splayed out uncontrollably. As he looked up, auto-autos came careening toward him from all sides. Just as he was about to be smashed to bits, he always awoke in an icy cold sweat.

Finally, after interminable bad dreams, waking and asleep, the last trial day came. The courtroom was hushed. The Sing spoke, “After careful consideration of the facts of the case, the testimony, and a review of precedents, I have reached my Assignment Figures.”

Harvey looked at the avatar of The Sing. He wished he could crane his neck around and glance at Ada, but it would be too obvious and perhaps be viewed as disrespectful.

The Sing continued, “I find each of the drivers of the thirteen auto-autos to be responsible for 1.2 percent of the overall damages and court costs. I find each of the 12 members of the board of directors of Generic Motors to be 1.4 percent responsible for overall damages and court costs.”

Harvey began to relax a little, but that still left a lot of liability. “I find the shareholders of Generic Motors as a whole to be responsible for 24% of the overall damages and court costs. I find the City of Nod to be 14.6% responsible. I find the State of New York to be 2.9% responsible.”

Harvey tried to remind himself that whatever the outcome, he had acted the best he knew how. He tried to remind himself that the Assignment Figures were not really a judgment of guilt or innocence as in old-fashioned trials. It was all about what worked to modify behavior and make better decisions. Nonetheless, there were real consequences involved, both financial and in terms of his position and future influence.

The Sing continued, “I find each of the thirty members of the engineering team to be one half percent responsible each, with the exception of Quillian Silverman, who will be held 1% responsible. I find Quillian Silverman’s therapist, Anna Fremde, 1.6% responsible. I find Dr. Sirius Jones, the supervisor of Harvey Ross, 2.4% responsible.”

Harvey’s mind raced. Who else could possibly be named? Oh, crap, he thought. I am still on the hook for hundreds of credits here! He nervously rubbed his wet hands together. Quillian’s therapist? That seemed a bit odd. But not totally unprecedented.

“The remainder of the responsibility,” began The Sing.

Photo by Reza Nourbakhsh on Pexels.com

Crap, crap, crap thought Harvey.

“I find belongs to the citizenry of the world as a whole. Individual credit assignment for each of its ten billion inhabitants is, however, incalculable. Court adjourned.”

Harvey sat with mouth agape. Had he heard right? His share of costs and his decrement in influence was to be zero? Zero? That seemed impossible even if fair. There must be another shoe to drop. But the avatar of The Sing and the Devil’s Advocate had already blinked out. He looked over at Gerry who was smiling his catbird smile. Then, he glanced back at Ada and she winked at him. He arose quickly and found her in his arms. They were silent and grateful for a long moment.

The voice of the Bailiff rang out. “Please clear the Court for the next case.”


Author Page

Welcome, Singularity

As Gold as it Gets

At Least he’s our Monster

Stoned Soup

The Three Blind Mice

Destroying Natural Intelligence

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

Essays on America: The Game

The Walkabout Diaries: Bee Wise

Travels with Sadie

Fifteen Properties of Good Design

Dance of Billions

https://www.barnesandnoble.com/w/dream-planet-david-thomas/1148566558

Where do you run when the whole world is crumbling under the weight of human folly?

When the lethal Conformers invade 22nd century Pittsburgh, escape becomes the top priority for lovebird scavengers Alex and Eva. But after the cult brainwashes Eva, she and Alex navigate separate paths—paths that will take them into battle, to the Moon, and far beyond. 

Between the Conformers’ mission to save Mother Earth by whittling the human race down to a loyal following, and the monopolistic Space Harvest company hoarding civilization’s wealth, Alex believes humanity has no future. And without Eva, he also has no future.

Until he meets Hannah and learns the secrets that change everything.

Plotting with her, he might have a chance to build a new paradise. But if he doesn’t stop the Conformers and Space Harvest first, paradise will turn into hell.

Secret Sauce

04 Tuesday Nov 2025

Posted by petersironwood in driverless cars, psychology, The Singularity

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, communication, Cooking, ethics, integrity, interaction, marketing, sauce, science, technology, the singularity

No need to panic, thought Harvey. Ada should be back soon. Or, I can go to a neighbor. I am not going to freeze to death on my own front porch. Harvey shivered just then as another icy blast hit him. He turned and scanned the neighborhood. Crumpled cars blocked the streets. None of the houses in his immediate area were lit. Wasn’t this the season of lights? I suppose one of the motorists could help — if any of their cars was still in working order. And if they were willing to break the law and leave the scene of an accident. And if they had sense enough to have snow tires.

He stamped his feet on the concrete. Harvey told himself that this was to keep circulation going, and not some childish outburst of frustration. He looked down the street and saw two dim figures approaching arm in arm from the direction of the Von Neumann’s house. As they drew nearer, he heard the warm voice of his sweet Ada.

“Hey, Harv! Did you decide to come out and enjoy the winter beauty too?”

“Hi, Ada. Please tell me you have a key.”

“Sure. I always take my keys when I leave the house.” She laughed. “Wouldn’t want to lock myself out.” She chuckled again. “Guess what? I found Lucy out for a walk too and I invited her over for dinner.”

“Hi, Lucy. Sure. We’re just having mainly mixed veggies for dinner, but if that’s okay…”

Lucy smiled. “Great with me, Harvey. Thanks!”

Ada spoke again, “Come on Harv. It’s beautiful outside but we’re cold. Let’s go in! Besides too much traffic out here for my taste. What a crash! Say, isn’t that …in fact, aren’t those two blue cars ones that you worked on? I thought they were supposed to be uncrashable.”

Harvey sighed. “Well, nothing is uncrashable. AI cannot undo the laws of physics. No doubt, some human driver without proper tires or following too close started a chain reaction.”

Ada said, “Yeah. Let’s discuss this inside. Okay?”

“Sure,” said Harvey. “Can you get the door?”

“Well, okay. Oh! You didn’t lock yourself out did you?” Ada laughed in soprano and Lucy added the alto line. “You picked a great night for it.”

“I’ll explain inside.”

Ada unlocked the door. In the trio went; they shook off their snow, removed their boots, and headed into the kitchen. Harvey began unloading vegetables from the fridge while Ada turned on some Holiday music. “Hey, Harv, how about the three of us stump JCN at trivia while you cook?”

Harvey did not really want to explain that he may have accidentally wiped out their bank account with Lucy in the room. “No, let’s just talk. Let JCN go dream or whatever it is he does. I just feel like human voices tonight.”

“Okay, Hon. Did you see the accident? How it started?”

“No, I was inside when I heard the crash, and then, I started to worry about you so….Anyway, Lucy, any vegetables you don’t like? Sweet potato okay? And cilantro? And how about curry sauce?”

“All, good, Harvey. I’m easy. Anything is fine with me.”

Harvey stole a quick glance at Lucy. Was that a double entendre? Surely not. He was imagining things. “Cool. I’ll start with the sweet potatoes. They take a little longer.”

Harvey quickly filled the skillet with a little olive oil and some orange flavored bubbly water, added the spices and began cleaning and chopping.

Ada said, “Harvey makes a really good sauce for vegetables.”

Harvey, meanwhile, focused on not adding his finger to the mix. His mind was elsewhere. He wondered whether the pile-up outside had really been caused by human error or…

Lucy chimed in. “Sounds delicious, Harvey. What’s in your secret sauce? I’d love to have it.”

Harvey frowned slightly, “Well, there’s no real secret. Secret sauce. Secret sauce. Why do people have sauces? Did you ever consider that?”

Ada laughed again. The Holidays seemed to make her genuinely happy. “No, I haven’t, but I’m sure you are about to tell us.”

Harvey continued to chop sweet potato, as he began, “Maybe that’s what’s wrong with Sing. No secret sauce. No sauce at all, in fact.”

Lucy spoke up, “What? What are you talking about, Harvey? You want to put your sauce into a computer system? Well, I’m sure I’d love it, but I’m not so sure about the Sing.” Now Lucy and Ada both laughed.

Harvey continued, “You see what the water does?”

Lucy wanted to play along. “Cooks the vegetables? That would be my guess.” Lucy and Ada laughed again.

“Exactly!” agreed Harvey, “but how? Do you see? Water boils at 100°C. No matter what the heat is, it never gets hotter in the pan than 100 degrees. The sauce guarantees a constant cooking environment.”

Lucy seemed uncertain. “But you can make it hotter by turning up the flame, right?”

“No. No. It may boil more vigorously and I’ll run out of sauce sooner, but the temperature will remain constant. That’s one effect. But there’s more. The sauce guarantees a constancy of interaction!”

Photo by Pixabay on Pexels.com

Ada asked, “Interaction? You are saying the sauce lets the veggies talk to each other?”

In the background, “We Three Kings” began its mournful minor musings. “Yes,” mused Harvey. “Exactly. I mean, they obviously do not literally talk, but imagine these vegetables are cooking and there is no sauce. In some cases, you have a piece of sweet potato next to a piece of red pepper, so they share flavors. In another case, a piece of sweet potato is next to broccoli, so they share flavors. The sauce provides a way for all these vegetables to exchange flavors evenly throughout the whole dish. And the key. The key in music. All the notes ‘know’ what the key is, so every choice is limited by this global structure. And the beat, of course. Everything works in harmony. All because of the secret sauce! But there is no secret! It’s been right in front of us the whole time!”

Ada was no longer laughing. “You’re probably right, Harv, but are you feeling okay? Maybe you got a little hypothermia out there?”

“No, no. I’m fine. Don’t you see? The rhythm and the beat of the music! They provide a coherent overall structure for all of these different instruments and notes to play nicely together.”

Photo by Pixabay on Pexels.com

Lucy added, “Well, I for one am all for playing nicely together.”

Harvey stopped chopping for a moment. “Exactly! There are global rules that make the individual parts work together. And, the curry sauce not only provides a consistent basis for the dish. It also dictates, or at least influences, which elements I add to the vegetables. Some vegetables are not going to taste right or look to be the right color with curry sauce. And, it lets them all communicate in a common language. You see? We humans see something like cars crumpled up and hear the crash and we can put the two together. Right?”

Ada had lots of experience with the way Harvey’s mind worked so she realized he was quite serious. Lucy, on the other hand, assumed he was just trying to be funny or had had a couple martinis before she arrived on the scene. So Lucy decided to play along, “Well, Harvey, all this talk about your secret sauce is giving me an appetite. Any ETA on dinner?”

Harvey continued, “But the Sing doesn’t have any secret sauce. Nor JCN. There is no overall way for the various pieces of knowledge to work together in a harmonious whole. That’s why JCN wiped out our bank account! That’s probably why the cars crashed too.”

“Smells delicious, Harvey,” Lucy said.

Ada was beginning to forget about dinner. “Harvey. What did you say about our bank account?”

“The Sing needs a way for the parts to work together in a harmonious overall structure! Otherwise, any slight error can be magnified in particular cases. Once the system tries to operate on cases that are outside of what was imagined at design time, there is no guarantee about results!”

“Harvey. Go back to the part about our bank account.”

Harvey stirred the vegetables absent-mindedly. “If I let this sauce all boil away, the same thing will happen. Some vegetables will get burned. The taste and texture will no longer work together.”

Ada was not to be deterred. “Harvey. Tell me about our bank account. What do you mean that it was wiped out?”


“Yes, Ada! That’s what I am saying. Of course, there are rules and the rules cover a huge number of cases. But there is no overall set of principles that the Sing has to abide by. There is no secret sauce! There is no sauce of any kind. It’s ALL vegetables. I think dinner is ready. Lucy did you want yogurt or cheese on yours?”

“Yum. Give it to me with yogurt please.”

“Okay, Lucy. And I know Ada likes hers that way too.”

“Right you are Harvey. What about our bank account?”

Harvey’s eyes looked away from the mind maps he was drawing in his head and he looked at Ada directly. “Ada, let’s eat first. I am sure that we can restore our bank account somehow through backup systems. JCN made an error. But I didn’t transfer the money or really authorize any payments or anything like that. It’s just a bank error. But for now, let’s eat. We can recover, Ada, because the human systems that surround and control the Sing still include sauce. At least for now.”

In the background, “Joy to the World” began playing in 4/4 time in D major.



Author page

Welcome, Singularity

Parametric Recipes and American Democracy

Corn on the Cob

Absolute is not just a Vodka

Finding the Mustard

Roar, Ocean, Roar

Dance of Billions

https://www.barnesandnoble.com/w/dream-planet-david-thomas/1148566558

Turing’s Nightmares: Axes to Grind

10 Friday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, emotional intelligence, empathy, ethics, M-trans, philosophy, Samuel's Checker Player, technology, the singularity


Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, David. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” David’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, David! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

David ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence? Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence, but I’m just saying it should not be the only…or even the main axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered David, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And, we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need…we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So, are you saying that they weren’t nice just because they weren’t smart enough? Because there are some people who are much nicer and probably not so intelligent.”

“OK, David. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions upon which we would like to make progress. Then, we can define, either intensionally or more likely extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but, they are somewhat different conceptually. Let’s say, part of what we want is for the machine to have empathy. It has to be good at guessing what people are feeling based on context alone. Perhaps another skill is reading the person’s body language and facial expressions.”

“OK, David, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and transitioned his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be ‘ethical’; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger transitioned his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it? He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button.

Nothing happened.

And then — everything seemed to happen at once. A high-pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible, but Roger noticed just before losing consciousness that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.


Author Page on Amazon

Turing’s Nightmares

Welcome, Singularity

Destroying Natural Intelligence

Roar, Ocean, Roar

Travels With Sadie 1

The Walkabout Diaries: Bee Wise

The First Ring of Empathy

What Could be Better?

A True Believer

It was in his Nature

Come to the Light Side

The After Times

The Crows and Me

Essays on America: The Game

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just tote out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month and without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”


————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Starting your Customer Experience with a Lie

26 Friday Sep 2025

Posted by petersironwood in America, essay, management, Uncategorized, user experience

≈ Leave a comment

Tags

Business, cancer, Customer experience, Democracy, ethics, honesty, marketing, scam, spam, truth, UX

I really need someone to explain to me the strategy behind the following types of communications.  I get things in email and in snail mail and they start out with something like, “In response to your recent enquiry…”, or “Here is the information you requested.” or “Congratulations!  Your application was approved!”  More recently, I’ve gotten text messages giving my “secret code” (which I shouldn’t share with anyone) which will allow me to access my account with unexplained riches of cryptocurrency.


And…they are all LIES!  I understand that sometimes people lie.  And I understand that companies are sometimes greedy.  But I do not understand how it can possibly be in their interest to start their communications with a potential customer with a complete and easily discovered lie.  What is up with that?  So far, the only explanation I can gather is that they want only a very small number of very, very gullible (perhaps even impaired?) customers that they can soak every penny out of, so the initial contact is a kind of screening device.  Any other suggestions?

In the eleven years since I first published this post, the level of lying and misdirection has only increased. It has spread like a cancer to every segment of American society. Perhaps that is not surprising given that we have a convicted felon (for fraud) in the “Whites Only House.” Many politicians of the past have bent the truth (encouraged a certain “spin” on the facts).  But typically, this has been done in a way that’s hard to trace or hard to prove or is targeted to specific issues. The lie of “trickle-down economics” is one that has transcended Republican and even many Democratic administrations for decades. 

In essence, trickle-down economics is the lie that giving special breaks to the very wealthiest individuals and corporations in the country will increase their wealth, and that this increased wealth will actually benefit everyone, because the very richest people will spend that extra money, stimulate demand, and make everyone richer. In case you’ve been asleep for the last fifty years, that’s a lie. 


Increased wealth in America happened largely because of increased productivity. People invented tools and processes that were more efficient. Some of these innovations and improvements were due to inventions. Many of these inventions were driven by breakthroughs in science and technology. Other improvements were simply because workers learned how to do things better from experience and we as a people got better at sharing those improved ways of doing things. Increased productivity led to increased wealth which was shared by owners and workers. Profits went up faster than costs but so did wages. Nice. 

Until about the mid 1970’s. Since then, productivity has continued to increase, but nearly all of the increased wealth has gone to the greediest people on the planet. Along with the lie of “trickle-down economics” several ancillary lies have been told over and over. One is the myth of the “Self-Made Man” which suggests that billionaires shouldn’t have to pay taxes because, after all, they earned their money by working 100,000 times harder and smarter than everyone else. Bunk. See link below. 

Another ancillary lie is that we must pay CEO’s and people who own stuff lots and lots of money because otherwise they won’t invest their money in America or work for American companies. Again, balderdash. It’s been studied. 


Another ancillary lie is that lowering taxes on poor people will only be bad for them because they will waste the extra money on drugs and cigarettes and alcohol and pornography, while lowering taxes on rich people is good because they will spend their money on the fine arts and supporting charities and science. Nonsense. Of course, sometimes poor people will spend their money on “vices” and sometimes rich people are very charitable. However, there’s no such general phenomenon that characterizes all of these groups. Generally, rich people actually are less generous in their giving than poor people, and the studies of Dan Ariely (Predictably Irrational) show that they typically cheat more than poor people. 

Politicians have been “spinning” or downright lying about the impact of their economic policies for quite some time now. Recently, however, the scope of lying has extended to everything. Putin’s Puppet doesn’t just lie about the impact of his economic policies (“foreign countries pay us for the tariffs I’m imposing”). The Trumputin Misadministration lies about science, medicine, history, crime, geography, technology and everything else. It is a war on truth itself. Not only does the Misadministration itself lie; it wants to censor anyone who tells the truth. 

Make no mistake. This is not simply a difference of opinion about how to govern. Fascism is a philosophy that replaces governing with absolute control. In effect, everyone in a fascist state is a slave. It destroys humanity and life itself. 

To ignore the truth and refuse to admit to your mistakes is not just “anti-democratic” — it is anti-life. Life only exists and persists when it is able to sense what is happening in the environment and make adjustments based on that input. Logically, the only possible ultimate outcome of complete fascism is complete death. 

But we don’t have to rely on logic alone. We have historical examples. Hitler, Stalin, and Mao sought absolute power and ended up killing millions of their own people. A dictatorship is a liarship and as such, it necessarily destroys everyone. If you think you’re safe because you’re male, or straight, or white, or “conservative” or rich, you’re deluding yourself. Nearly all of Stalin’s closest associates were destroyed by Stalin. The record of the Felon is the same. He’s betrayed his contractors, his business partners, his wives, his own VP, and even his decade-long rape buddy. 

In such an ocean of lies as we now find ourselves, it may seem even more tempting for businesses and organizations and individuals to lie as well. “After all, everyone’s doing it!” No. The opposite. It’s more important than ever for individuals, organizations, and businesses to uphold the highest ethical standards; to be honest about and to learn from mistakes; to champion the truth and not to encourage the growth of cancer. 

If you and your organization or team cave in to the current trend of lies, you will ruin your organization and your team — as well as your own personal integrity — for the long term. If lying for profit is the spirit you follow, you will hire dishonest people and honest people will quit. Your policies, your allies, your suppliers, your customers will not be conducive to having a productive and thriving organization. Of course, your reputation will suffer, but the disease is much deeper and more lasting than that. Now is the time to be more determined than ever to show honesty and integrity in your hiring, your management, your policies, and your choice of business partners. 




————

Cancer Always Loses in the End

A Little is not a Lot

Try the Truth

You Bet Your Life

Where Does Your Loyalty Lie?

As Gold as it Gets

The Orange Man

At Least he’s Our Monster

The Three Blind Mice

The Con Man’s Con Man

Absolute is not Just a Vodka
