petersironwood

~ Finding, formulating and solving life's frustrations.
Monthly Archives: November 2025

Problem Formulation: Who Knows What?

28 Friday Nov 2025

Posted by petersironwood in AI, creativity, design rationale, psychology, Uncategorized


Tags

AI, browser, HCI, problem formulation, problem framing, problem solving, query, search, seo, technology, thinking, usability, UX


This post focuses on the importance of discovering who knows what. It’s easy to assume (without thinking!) that everyone knows what you know. 

At IBM Research, around the turn of the century, I was asked to look at improving customer satisfaction with the search function on IBM’s website. Rather than using someone else’s search engine, IBM used one developed at IBM’s Haifa Research Lab. It was a very good search engine. Yet customers were not happy. By way of background, it’s worth noting that, compared with many corporate websites, IBM’s was meant for a wide variety of users and contained many kinds of information. It was meant to support both people buying their first Personal Computer and IT experts at large banks. It had information about a wide variety of hardware, software, and services. The site was also designed to serve as an attractor for investors, business partners, and potential employees. In other words, the site was vast and diverse. This made having a good search function particularly important.

A little study of the data that had already been collected showed that the mean number of search terms entered by customers was only 1.2. What?? How can that be? Here was a website with thousands of products and services, designed for use by a huge diversity of users, and people were entering a mean of only 1.2 search terms? What were they thinking?!
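The analysis behind that 1.2 figure is simple to sketch. Here is a minimal illustration of computing mean query length from a search log; the log entries below are invented for illustration, not IBM’s actual data:

```python
# Toy query log -- the entries are invented for illustration;
# the real study used IBM's actual site-search logs.
queries = [
    "thinkpad",
    "aix",
    "lotus notes training",
    "servers",
    "db2",
]

# Mean number of whitespace-separated terms per query.
term_counts = [len(q.split()) for q in queries]
mean_terms = sum(term_counts) / len(term_counts)
print(round(mean_terms, 1))  # 1.4 for this toy log
```

Even this toy log shows the pattern the real data did: most queries were a single word.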



Of course, there were a handful of situations when one search term might work; e.g., if you wanted to find out everything about a specific product that had a unique one-word name or acronym (which was rare). For most situations though, a more “reasonable” search might be something like: “Open positions IBM Research Austin” or “PC external hard drives” or “LOTUS NOTES training.” 

We invited a sample of users of IBM products & services to come into the lab and do some tasks that we designed to illuminate this issue. In the task, they would need to find specified information on the IBM website while I observed them. One issue became immediately apparent. The search bar on the landing page was far too small. In actuality, users could enter as many search terms as they liked. Their terms would keep scrolling and scrolling until they hit “ENTER.” The developers knew this, but most of our users did not. They assumed they had to “fit” their query into the very small footprint that presented itself visually. Recommendation one was simply to make that space much larger. Once the search bar was expanded to about three times its original size, the number of search terms increased dramatically, as did user satisfaction. 

In this case, the users framed their search problem as: “How can I make the best query that fits into this tiny box?” (I’m not suggesting they said this to themselves consciously, but the visual affordance led them to that self-imposed constraint.) The developers thought the users would frame their search problem as: “What’s the best sequence of terms I can put into this virtually infinite window to get the search results I want?” After all, the developers knew that any number of terms could be entered.

Although increasing the size of the search bar made a big difference, the supposedly good search engine still returned many amazingly bad results. Why? The people at the Haifa lab who had developed the search engine were world class. At some point, I looked at the HTML of some of the web pages. Many web pages had masses of irrelevant metadata. I found some of the people who developed these web pages and discussed things with them. Can you guess what was going on?



Many of the developers of web pages were the same people who had been developing print media for those same products and services. They had no training and no idea about metadata. So, to put up the webpage about product XYZ, they would go to a nice-looking web page about something else, say, training opportunities for ABC. They would copy that entire page, including the metadata, and then set about changing the text about ABC to text about product XYZ. In many cases, they assumed that the strange stuff in angle brackets was some bizarre coding stuff that was necessary for the page to operate properly. They left it untouched. Furthermore, when they “tested” the pages they had created about XYZ, they looked okay. The information about XYZ was there. Problem solved.

Only, of course, the problem wasn’t solved. The search engine considered the metadata that described the contents to be even more important than the contents themselves. So a user would issue a query about XYZ and receive links about ABC, because the XYZ page still had the “invisible” metadata about ABC. In this case, many of the website developers thought their problem was to put in good data, when what they really needed to do was put in good data and relevant metadata.
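To see why stale metadata poisons results, here is a minimal, hypothetical sketch of a ranker that, like the engine described above, weights metadata matches above body-text matches. The pages, the 3x weight, and the scoring are invented for illustration; this is not the Haifa engine’s actual algorithm:

```python
def score(page, query_terms):
    """Score a page: metadata matches count three times as much as body matches."""
    meta = page["metadata"].lower().split()
    body = page["body"].lower().split()
    total = 0
    for term in query_terms:
        t = term.lower()
        total += 3 * meta.count(t)  # metadata weighted more heavily than content
        total += body.count(t)
    return total

# The XYZ page was copied from an "ABC training" template: the visible body
# was updated, but the invisible metadata still describes ABC.
pages = [
    {"url": "/xyz", "metadata": "ABC training courses",
     "body": "XYZ product specs and XYZ pricing"},
    {"url": "/abc", "metadata": "ABC training courses",
     "body": "ABC training schedule"},
]

# A query about ABC training gives the XYZ page a substantial score
# purely from its stale metadata, so it surfaces among the ABC results.
print(score(pages[0], ["ABC", "training"]))  # 6 -- all from metadata
print(score(pages[1], ["ABC", "training"]))  # 8
```

Updating the XYZ page’s metadata to actually describe XYZ drops its score for ABC queries to zero, which is exactly the fix the web developers needed to make.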

A third issue also revealed itself from watching users. In attempting to do their tasks, many of them suggested that IBM should provide a way for more than one webpage to appear side by side on the screen so that they could, for instance, compare the features and functions of two different product models rather than having to copy information from the web page about one model and then compare their notes against the second page.

Good suggestion. 

Of course, IBM & Microsoft had provided this function. All one had to do was right-click in order to bring up a new window. Remember, these were not naive users. These were people who actually used IBM products. They “knew” how to use the PC and the main applications. Yet they were still unfamiliar with right-clicking. Indeed, allowing on-screen comparisons is one of the handiest uses of right-click for many people.

This issue is indicative of a very pervasive problem. Ironically, it is an outgrowth of good usability! When I began working with computers, almost nothing was intuitive. No one would even attempt to start programming in FORTRAN or SNOBOL, let alone Assembly Language or Machine Code, without looking at the manual. But LOTUS NOTES? A browser? A modern text editor? You can use these without even looking at the manual. That’s a great thing. But —

…there’s a downside. The downside is that you may have developed procedures that work, but they may be extremely inefficient. You “muddle through” without ever realizing that there’s a much more efficient way to do things. Generally speaking, many users formulate their problem, say, in terms like: “How do I create and edit a document in this editor?” They do not formulate it in terms of: “How do I efficiently create and edit a document in this editor?” The developers know all the splendid features and functions they’ve put into the hardware and software, but the user doesn’t. 

It’s also worth noting that results in HCI/UX are dependent on the context. I would tend to assume that in 2021 (when I first published this post), most PC users knew about right-clicking in a browser even though in 2000, none of the ones I studied seemed to realize it. But —

I could be wrong. 

————————————

The Invisibility Cloak of Habit

Essays on America: Wednesday

Index to a catalog of “best practices” in teamwork & collaboration. 

Author Page on Amazon

What about the butter dish?

Labelism

The Stopping Rule

The Update Problem

Turing’s Nightmares: Eight

21 Friday Nov 2025

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, openai, peace, philosophy, seva, teamwork, technology, the singularity, Turing, ubuntu, United Peoples Ecosystem


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart, among others, as Intelligence Augmentation, the “super-intelligence” comes from people connecting.


It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far outstripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.


One problem with our historical approach to communication is that it evolved for many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, makes clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.


More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English still contains phrases that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin. Many other traditional legal terms in English have similar bilingual origins.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little through having an excellent search engine and much more through the billions of transactions of other human beings. People are exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.


Equally importantly, we are learning more and more about how to collaborate effectively both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of these suggest that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”


————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

An Inside View of IBM’s Innovation Jam

————-

Author Page on Amazon

Turing’s Nightmares: The Road Not Taken

Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

The Dance of Billions

Imagine All the People…

Roar, Ocean, Roar

Corn on the Cob

Take a Glance; Join the Dance

The Self-Made Man

Indian Wells

Turing’s Nightmares: Seven

20 Thursday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, competition, cooperation, ethics, philosophy, technology, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine or a wiser machine or a more enlightened one? These are all related concepts but somewhat different. A wiser machine, to take one example, might be a system that not only solves problems that are given to it more quickly. It might also mean that it looks for different ways to formulate the problem; it looks for the “question behind the question” or even looks for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, it may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “insure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the major and main rationalization for most of the evil and aggression in the world? Perhaps a super intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have no blind spots or errors? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John is making is implicit — that he is not trying to dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John already has his mind made up that intelligence is the ultimate goal and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better. 


If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

———–

Turing’s Nightmares

Author Page

Welcome Singularity

Destroying Natural Intelligence

Come Back to the Light Side

The First Ring of Empathy

Pattern Language Summary

Tools of Thought

The Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Essays on America: The Game

Wednesdays

What about the Butter Dish?

Where does your Loyalty Lie?

Labelism

My Cousin Bobby

The Loud Defense of Untenable Positions

Turing’s Nightmares: Six

19 Wednesday Nov 2025

Posted by petersironwood in sports, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, fiction, life, sports, Tennis, Turing


Human Beings are Interested in Human Limits.

About nine years ago, a Google AI system won its match over the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players will learn faster and that top-level human play will improve. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors as training devices. However, very soon, these might also provide useful information during play. What about that? Suppose that you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs and tennis racquets and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance enhancing” drugs just to stay healthy? Sharapova’s case is just one. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible for the computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system is able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw or whether a tennis player is about to serve down the T or out wide? Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or curve more quickly. But it also means that they are more likely to pick up subtle tip-offs in their opponents’ motion that give away their intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Jannik Sinner or Carlos Alcaraz in order to detect patterns of tip-offs, and then that information was used to help train Alexander Zverev to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signals of base coaches, and so on.

Instead of teaching Zverev these patterns ahead of time, suppose he were to have a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed him to anticipate better?
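The kind of pattern mining described above can be sketched very simply. Everything below is invented for illustration (the cue, the serve data, and the outcomes); a real system would extract cues from video across thousands of serves, but the core idea is just tallying which pre-serve cues co-occur with which outcomes:

```python
from collections import Counter

# Invented pre-serve observations: which way the server's toss leaned,
# and where the serve actually went.
serves = [
    {"toss_lean": "left",  "direction": "wide"},
    {"toss_lean": "left",  "direction": "wide"},
    {"toss_lean": "left",  "direction": "T"},
    {"toss_lean": "right", "direction": "T"},
    {"toss_lean": "right", "direction": "T"},
]

def tip_off_table(serves, cue):
    """For each observed value of a cue, tally where the serve went."""
    table = {}
    for s in serves:
        table.setdefault(s[cue], Counter())[s["direction"]] += 1
    return table

def predict(table, cue_value):
    """Predict the direction most often seen after this cue value."""
    return table[cue_value].most_common(1)[0][0]

table = tip_off_table(serves, "toss_lean")
print(predict(table, "left"))   # wide
print(predict(table, "right"))  # T
```

A returner armed with these tallies gains exactly the split-second anticipation described in the scenario, which is why the ethics of who may use such analysis, and when, matters.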

I do not know the “correct” ethical answer for all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance enhancing drugs, perhaps that is okay if and only if everyone else in the race knows that and has the opportunity to take the same drugs and if everyone watching knows it as well. Similarly, although I would prefer that tennis players only use IT for training, I would not be dead set against real time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe but it has a side-effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon


Welcome Singularity

The Day from Hell

Indian Wells Tennis Tournament

Destroying Natural Intelligence

US Open Closed

Life is a Dance

Take a Glance; Join the Dance

The Self-Made Man

The Dance of Billions 

Math Class: Who are you?

The Agony of the Feet

Wordless Perfection

The Jewels of November

Donnie Gets a Tennis Trophy

Turing’s Nightmares: Chapter Five

17 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, health, medicine, Personal Assistant, philosophy, technology, the singularity, Turing


An Ounce of Prevention: Chapter 5 of Turing’s Nightmares

Hopefully, readers will realize that I am not against artificial intelligence (after all, I ran an AI lab for a dozen years); nor do I think the outcomes of increased artificial intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson had already begun “digesting” the vast and ever-growing medical literature more than a decade ago. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember every potentially relevant detail about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not-too-distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.

Not only medicine but health more generally provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, Activity Tracker, LoseIt, and similar apps help track people’s habits, and for many, this really helps them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.

In Chapter 5 of Turing’s Nightmares, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history in order to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA uses that fact to help persuade him.

It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary, the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It’s fairly clear to the reader that, in many ways, Lee is in charge though it may not be obvious to Adam.

In some ways, having an AI system that is neither “clueless” as most systems are today nor “arrogant” as we might imagine a super-intelligent system to be (and as the systems in chapters 2 and 3 were), but that instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from over-eating, eating candy, smoking cigarettes, etc.? Would we be happy to have such a good “friend,” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier just having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decisions. Or would we prefer a PA to “keep us on track” even while pretending that we are in charge?


Author Page

Welcome, Singularity

Destroying Natural Intelligence

E-Fishiness comes to Mass General Hospital

There’s a Pill for That

Essays on America: The Game

The Self-Made Man

Travels with Sadie

The Walkabout Diaries

The First Ring of Empathy

Donnie Gets a Hamster

Plans for US; some GRUesome

Imagine All the People

Roar, Ocean, Roar

The Dance of Billions

Math Class: Who are you?

Family Matters: Part One

Family Matters: Part Two

Family Matters: Part Three

Family Matters: Part Four

An Open Sore from Hell

16 Sunday Nov 2025

Posted by petersironwood in America, poetry


Tags

coward-ICE, cowardice, Democracy, Dictatorship, fascism, history, life, poem, poetry, politics, truth, USA

Everything is swell

There’s an open sore from hell

Knocking on the door

Don’t bother with the bell

Monsters with a mask

Have a thrilling vital task

Tear apart our nation 

Feel the thrill of their elation

Parading as a patriotic posse pod

Parading as the very voice of God

Knocking down the door

Acting as the whore

Of the petty orange melon 

Of the child rapist felon

The Puppeteer of Puke

Acting like a Duke

Imagining he’s King

Because his teeny thing-a-ling

The ICEholes just deprave

Nothing noble, nothing brave

To tear apart our should and could

Nothing holy, nothing good

Not the smallest jot of joy 

The monster that’s the Monster of Destroy

Thinking its his toy

To militarily deploy

Addictive greed his only creed

In his crusade of self-destruction

Hate and fear and no construction

And the open sore from hell

Doesn’t bother with the bell

Knocking down the walls

Builds a cage of gilded halls


But the people, ah, the people

Can see the void beneath the steeple

Will not go gently into that blank night

Will not forsake the shining light

Will not let the greedy rapists win

Veneers of lies are wearing thin

And soon the king of agitate

Minions spewing lies and hate

Grow weary of their dreary ways

Grow leery of their dead-eyed days

And the people, ah, the people see

What the Not-See Party cannot see

That cancer always loses in the end

The light of love soon will mend

The open sores of cancerous greed

They’re but a self-destructive weed

Who wilts and whines and whinges 

When their chief departs his hinges

—————

The Ailing King of Agitate

At Least He’s Our Monster

Absolute is not Just a Vodka

Cancer Always Loses in the End

D4

Dick-Tater-$hits

Imagine All the People

Roar, Ocean, Roar

The Dance of Billions

Destroying Natural Intelligence

Peace

Who Won the War? 

We Won the War! We Won the War!

The US Extreme Court

Come to the Light Side

Where Does Your Loyalty Lie?

What About the Butter Dish? 

My Cousin Bobby

Labelism

The Game

The Walkabout Diaries

The First Ring of Empathy

Travels with Sadie

The Truth Train 

The “Not-See” Party

Turing’s Nightmares: Chapter Four

12 Wednesday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, illusion, philosophy, SciFi, technology, the singularity, Turing, virtual reality, writing

Considerations of “Turing’s Nightmares,” Chapter Four: Ceci N’est Pas Une Pipe.

(This is a discussion or “study guide” for chapter four of Turing’s Nightmares). 

In this chapter, we consider the interplay of four themes. First, and most centrally, is the issue of what constitutes “reality.” The second theme is that what “counts” as “reality” or is seen as reality may well differ from generation to generation. The third theme is that AI systems may be inclined to warp our sense of reality, not simply to be “mean” or “take over the world” but to help prevent ecological disaster. Finally, the fourth theme is that truly super-intelligent AI systems might not appear so at all; that is, they may find it more effective to take a demure tone as the AI embedded in the car does in this scenario.

There is no doubt that, artificial intelligence and virtual reality aside, what people perceive is greatly influenced by their symbol systems, their culture and their motivational schemes. Babies as young as six weeks are already apparently less able to make discriminations of differences within what their native language considers a phonemic category than they were at birth. In our culture, we largely come to believe that there is a “right answer” to questions. Sometimes, that’s a useful attitude, but sometimes, it leads to suboptimal behavior.

Suppose an animal is repeatedly presented with a three-choice problem, let’s say among A, B, and C. A pays off randomly with a reward 1/3 of the time while B and C never pay off. A fish, a rat, or a very young child will quickly come to choose only A, thus maximizing their rewards. However, a child who has been to school (or an adult) will spend considerably more time trying to find “the rule” that allows them (they suppose) to win every time. At first, it doesn’t even occur to them that perhaps there is no rule that will enable them to win every time. Eventually, most will “give up” and choose only A, but in the meantime, they do far worse than a fish, a rat, or a baby does. This is not to say that the conceptual frameworks that color our perceptions and reactions are always a bad thing. They are not. There are obvious advantages to learning language and categories. But our interpretations of events are highly filtered and distorted. Hopefully, we realize that this is so, but we often tend to forget.
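The cost of rule-hunting in this task can be made concrete with a small simulation. This is only a sketch: the strategy functions, trial counts, and payoff probability are illustrative choices of mine, not taken from any actual experiment.

```python
import random

def run_trials(strategy, n=10_000, seed=42):
    """Play the three-choice task: A pays off 1/3 of the time; B and C never do."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        if strategy(rng) == "A" and rng.random() < 1 / 3:
            wins += 1
    return wins

# The fish/rat/baby strategy: simply choose A every time (maximizing).
def maximize(rng):
    return "A"

# A caricature of the schooled adult: keep sampling all three options,
# still hunting for a winning rule that does not exist.
def rule_hunt(rng):
    return rng.choice(["A", "B", "C"])

print(run_trials(maximize), run_trials(rule_hunt))
```

Over 10,000 trials, the always-A strategy wins roughly a third of the time, about three times as often as the rule-hunter, which is the point of the example.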

Similarly, if you ask the fans of two opposing sports teams to make a close call; for instance, whether there was pass interference in American football, or whether a tennis ball near the line was in or out, you tend to find that people’s answers are biased toward their team’s interest, even when their calls have no influence on the outcome.

Now consider that we keep striving toward more and more fidelity and completeness in our entertainment systems. Silent movies were replaced by “talkies.” Black-and-white movies and television were replaced by color. Most TV screens have gotten bigger. There are 3-D movies, and more entertainment is in high definition, even as sound reproduction has moved from monaural to stereo to surround sound. Research continues on reproducing smell, taste, tactile, and kinesthetic sensations. Virtual reality systems have become smaller and less expensive. There is no reason to suppose these trends will lessen any time soon. There are many advantages to using virtual reality in education (e.g., Stuart, R., & Thomas, J. C. (1991). The implications of education in cyberspace. Multimedia Review, 2(2), 17-27; Merchant, Z., Goetz, E., Cifuentes, L., Keeney-Kennicutt, W., and Davis, T. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers and Education, 70, 29-40). As these applications become more realistic and widespread, do they influence the perceptions of what even “counts” as reality?

The answer to this may well depend on the life trajectory of individuals and particularly on how early in their lives they are introduced to virtual reality and augmented reality. I was born in a largely “analogue” age. In that world, it was often quite important to “read the manual” before trying to operate machinery. A single mistake could destroy the machine or cause injury. There is no way to “reboot” or “undo” if you cut a tree down wrongly so it falls on your house. How will future generations conceptualize “reality” versus “augmented reality” versus “virtual reality”?

Today, people often believe it is important for high school students to physically visit various college campuses before deciding where to attend. There is no doubt that this is expensive in terms of time, money, and fossil fuels. Yet there is a sense that being physically present allows the student to make a better decision. Most companies similarly hire candidates only after face-to-face interviews, even though there is no evidence that this improves a company’s ability to predict who will be a productive employee. More and more such interviewing, however, is being done remotely. A “super-intelligent” system might well arrange for people who wanted to visit someplace physically to visit it virtually instead, while making the visit seem as “real” as possible. After all, left to their own devices, people seem to be making painfully slow progress, too slow, toward reducing their carbon footprints. AI systems might alter this trajectory to save humanity, to save themselves, or both.

In some scenarios in Turing’s Nightmares, the AI system is quite surly and arrogant. But in this scenario, the AI system takes on the demeanor of a humble servant. Yet it is clear (at least to the author!) who really holds the power. This particular AI embodiment sees no necessity in appearing to be in charge. It is enough to make it so and to manipulate the humans’ “sense of reality.”

Turing’s Nightmares

Wednesday

Labelism

Your Cage is Unlocked

Where do you Draw the Line?

The Walkabout Diaries: Sunsets

The First Ring of Empathy

The Invisibility Cloak of Habit

The Dance of Billions

The Truth Train

Roar, Ocean, Roar

Turing’s Nightmares: Chapter Three

11 Tuesday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence, 2) the value of having multiple and diverse AI systems living somewhat different lives and interacting with each other for improving intelligence, 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought, and 4) the fact that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct, not simply on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair the efficiency. A “personal assistant” robot could learn the behavioral patterns, voice, and preferences of a particular person more easily than we could develop speaker-independent speech recognition. The list of practical advantages goes on, but what is presumed in this chapter is that there are theoretical advantages to having actual robotic systems that sense and act in the real world in terms of moving us closer to “The Singularity.” This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for intelligence, or, for that matter, for consciousness. However, it seems quite natural to believe that the qualities of both intelligence and consciousness are influenced by what the entity can perceive and do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed or becomes paralyzed later in life, this does not necessarily greatly diminish the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were developed historically by people who, by and large, could move and perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., having glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were connected by a pivoted gondola: one kitten was able to “walk” through a visual field while the other was passively moved through the same field. The kitten that was able to walk developed normally; the other did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P. K., Tsao, F. M., and Liu, H. M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several different robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution and when it comes to the composition of human teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligences by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow down progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what they are up to! An analogy might be the first “proof” that you need only four colors to color any planar map. There were so many cases (nearly 2,000) that the proof made no sense to most people. Even the algebraic topologists who do understand it take much longer to follow the reasoning than the computer takes to produce it. (Although simpler proofs now exist, they all rely on computers and take humans a long time to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be way too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.
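The four-color case illustrates an asymmetry worth noting: checking a claimed coloring of one map is easy; producing the proof that four colors always suffice was the hard, computer-scale part. A minimal sketch of the easy half (the map, region names, and colors here are invented for illustration):

```python
# A tiny "map" as an adjacency list: each pair of regions shares a border.
edges = [
    ("WA", "OR"), ("WA", "ID"), ("OR", "ID"),
    ("OR", "CA"), ("ID", "NV"), ("CA", "NV"), ("OR", "NV"),
]

# A claimed coloring, using at most four colors (1-4).
coloring = {"WA": 1, "OR": 2, "ID": 3, "CA": 1, "NV": 4}

def is_proper(edges, coloring):
    """A coloring is proper if no two adjacent regions share a color."""
    return all(coloring[a] != coloring[b] for a, b in edges)

print(is_proper(edges, coloring))  # True
```

Verifying this one instance takes microseconds; the Appel-Haken proof that such a coloring always exists required checking hundreds of configurations by machine, which is exactly the "keeping tabs" problem in miniature.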

Finally, as in the case of Jeopardy!, advances along the trajectory toward “The Singularity” will require that the system “read” and infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)

But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified as “Do unto others as you would have them do to you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or, maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their own physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?
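The difference between the two rules is simple enough to state as code. This toy sketch (the names and the steak preferences are invented for illustration) shows how the naive rule projects the actor’s preference onto everyone, while the modified rule consults the recipient’s:

```python
# Invented preference table for illustration only.
preferences = {
    "me":    {"steak": "rare"},
    "guest": {"steak": "well done"},
}

def golden_rule(actor, recipient, topic):
    """Naive Golden Rule: project the actor's own preference onto the recipient."""
    return preferences[actor][topic]

def modified_golden_rule(actor, recipient, topic):
    """Modified rule: act on the recipient's preference, not the actor's."""
    return preferences[recipient][topic]

print(golden_rule("me", "guest", "steak"))           # rare
print(modified_golden_rule("me", "guest", "steak"))  # well done
```

A system inferring morality from examples would have to learn not just the rule but the preference table, person by person, and that table is exactly what a corpus of preached maxims leaves out.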

Turing’s Nightmares

Author Page

Welcome, Singularity

Destroying Natural Intelligence

How the Nightingale Learned to Sing

The First Ring of Empathy

The Walkabout Diaries: Variation

Sadie and The Lighty Ball

The Dance of Billions

Imagine All the People

We Won the War!

Roar, Ocean, Roar

Essays on America: The Game

Peace

Music to MY Ears

10 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, fiction, music, philosophy, technology, the singularity, truth, Turing, values

The non-sound of non-music.

What follows is the first of a series of blog posts that discuss, in turn, the scenarios in “Turing’s Nightmares” (https://www.amazon.com/author/truthtable).

One of the deep dilemmas in the human condition is this. In order to function in a complex society, people become “expert” in particular areas. Ideally, the areas we choose are consistent with our passions and with our innate talents. This results in a wonderful world! We have people who are expert in cooking, music, art, farming, and designing clothes. Some choose journalism, mathematics, medicine, sports, or finance as their fields. Expertise often becomes yet more precise. People are not just “scientists” but computer scientists, biologists, or chemists. The computer scientists may specialize still further into chip design, software tools, or artificial intelligence. All of this specialization not only makes the world more interesting; it makes it possible to support billions of people on the planet. But here is the rub. As we become more and more specialized, it becomes more difficult for us to communicate with and appreciate each other. We tend to accept the concerns and values of our field and sub-sub-speciality as the “best” or “most important” ones.

To me, this is evident in the largely unstated and unchallenged assumption that a super-intelligent machine would necessarily have any interest in building a “still more intelligent machine.” Such a machine might be so inclined. But it also might be inclined to choose some other human pursuit or, still more likely, to pursue something that is of no interest whatever to any human being.

Of course, one could theoretically ensure that a “super-intelligent” system is pre-programmed with an immutable value system guaranteeing that it will pursue, as its top priority, building a still more intelligent system. However, to do so would inherently limit the ability of the machine to be “super-intelligent.” We would be assuming that we already know what is most valuable and would hamstring the system from discovering anything more valuable or more important. To me, this makes as much sense as an all-powerful God allowing a species of whale to evolve, but predefining that its most urgent desire is to fly.

An interesting example of values can be seen in the figure-analogy dissertation of T. G. Evans (1968). Evans, a student of Marvin Minsky, developed a program to solve multiple-choice figure analogies of the form A:B::C:D1, D2, D3, D4, or D5. The program essentially tried to “discover” transformations and relationships between A and B that could also account for relationships between C and the various D possibilities. And, indeed, it could find such relationships. In fact, every answer is “correct.” That is to say, the program was so powerful that it could “rationalize” any of the answers as being correct.

According to Evans’s account, fully half of the work of the dissertation was discovering and then inculcating his program with the implicit values of the test takers so that it chose the same “correct” answers as the people who published the test. (This is discussed in more detail in the Pattern “Education and Values” I contributed to Liberating Voices: A Pattern Language for Communication Revolution (2008), Douglas Schuler, MIT Press.)

For example, suppose that figure A is a capital “T” and figure B is an upside-down “T.” Figure C is an “F.” Among the possible answers are “F” figures in various orientations. To go from a “T” to an upside-down “T,” you can rotate the “T” 180 degrees in the plane of the paper. But you can also get there by “flipping” the “T” outward from the plane. Or, you could “translate” the top bar of the “T” from the top to the bottom of the vertical bar. It turns out that the people who published the test preferred that you rotate the “T” in the plane of the paper. But why is this “correct”? In “real life,” of course, there is generally much more context to help you determine what is most reasonable. Often, there will be costs or side effects of various transformations that help determine which is the “best” answer. But in standardized tests, all that context is stripped away.
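The ambiguity can be sketched in a few lines of code. Here figures are sets of grid points; the coordinates are my own toy encoding, not Evans’s representation. Because the “T” is left-right symmetric, the in-plane rotation and the outward flip produce the same answer for A:B, yet they disagree when applied to the asymmetric “F,” so something beyond the transformations themselves must decide which option is “correct”:

```python
# Toy grid-point encodings of the letters (invented for this sketch).
T = {(0, 2), (1, 2), (2, 2), (1, 1), (1, 0)}          # capital "T"
F = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (1, 1)}  # capital "F"

def rot180(fig):
    """Rotate 180 degrees in the plane, about the center of a 3x3 grid."""
    return {(2 - x, 2 - y) for (x, y) in fig}

def flip(fig):
    """Flip outward from the plane (mirror about the horizontal midline)."""
    return {(x, 2 - y) for (x, y) in fig}

# Both transformations turn the symmetric "T" into the same upside-down "T"...
assert rot180(T) == flip(T)

# ...but they yield DIFFERENT candidate answers for the asymmetric "F",
# so the "correct" choice depends on which rule the test makers valued.
print(rot180(F) == flip(F))  # False
```

This is the half of Evans’s work the test score never shows: not finding a transformation, but ranking the equally valid ones the way the test publishers did.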

Here is another example of values. If you ever take the Wechsler “Intelligence” test, one series of questions will ask you how two things are alike. For instance, they might ask, “How are an apple and a peach alike?” You are “supposed to” answer that they are both fruit. True enough. This gives you two points. If you give a functional answer such as “You can eat them both,” you get only one point. If you give an attributional answer such as “They are both round,” you get zero points. Why? Is this really a wrong answer? Certainly not! The test makers are measuring the degree to which you have internalized a particular hierarchical classification system. Of course, there are many tasks and contexts in which this classification system is useful. But in some tasks and contexts, seeing that they are both round or that they both grow on trees or that they are both subject to pests is the most important thing to note.

We might consider and define intelligence to be the ability to solve problems. A problem can be seen as wanting to be in a state that you are not currently in. But what if you have no desire to be in the “desired” state? Then, for you, it is not a problem. A child is given a homework assignment asking them to find the square root of 2 to four decimal places. If the child truly does not care, it may become a problem not for the child but for the parent. “How can I make my child do this?” They may threaten or cajole or reward the child until the child wants to write out the answer. So, the child may say, “Okay. I can do this. Leave me alone.” Then, after the parent leaves, they text their friend on the phone and then copy the answer onto their paper. The child has now solved their problem.
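The homework itself, of course, is purely mechanical. A few lines of Newton’s method (a sketch of mine, not anything from the text) pin down the square root of 2 to four decimal places; what no amount of code supplies is the wanting:

```python
def sqrt2(iterations=5):
    """Newton's method on x^2 - 2: iterate x -> (x + 2/x) / 2 from x = 1."""
    x = 1.0
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
    return round(x, 4)

print(sqrt2())  # 1.4142
```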

Would a super-intelligent machine necessarily want to build a still more intelligent machine? Maybe it would want to paint, make music, or add numbers all day. And, if it did decide to make music, would that music be designed for us or for its own enjoyment? And, if it were designed for “us” who exactly is that “us”?

Indeed, a large part of the values equation is “for whose benefit?” Typically, in our society, when someone pays for a system, they get to determine for whose benefit the system is designed. But even that is complex. You might say that cigarettes are “designed” for the “benefit” of the smoker. But in reality, while they satisfy a short-term desire of the smoker, they are designed for the benefit of the tobacco company executives. They set up a system so that smokers themselves paid for research into how to make cigarettes even more addictive and for advertising to make them appeal to young children. Many such systems have been developed. As AI systems continue to become more ubiquitous and complex, the values inherent in them, and who is to benefit, will become more and more difficult to trace.

Values are inextricably bound up with what constitutes a “problem” and what constitutes a “solution.” This is no trivial matter. Hitler considered the annihilation of the Jews the “Final Solution.” Some people in today’s society think that the “solution” to the “drug problem” is a “war on drugs,” which has certainly destroyed orders of magnitude more lives than drugs themselves have. (Major sponsors of the “Partnership for a Drug-Free America” have been drug companies.) Some people consider the “solution” to the problem of crime to be stricter enforcement, harsher penalties, and building more prisons. Other people think that a more equitable society with more opportunities for jobs and education will do far more to mitigate crime. Which is a more “intelligent” solution? Values will be a critical part of any AI system. Generally, the inculcation of values is an implicit process. But if AI systems begin making what are essentially autonomous decisions that affect all of us, we need to have a very open and very explicit discussion, now, of the values inherent in such systems.

Turing’s Nightmares

Author Page

Welcome, Singularity

Destroying Natural Intelligence

Labelism

My Cousin Bobby

Where Does Your Loyalty Lie?

Wednesday

What about the Butter Dish?

Finding the Mustard

Roar, Ocean, Roar

The First Ring of Empathy

Travels with Sadie

The Walkabout Diaries

The Dance of Billions

It’s Just the Way We Were

09 Sunday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, apocalypse, arrogance, Artificial Intelligence, cognitive computing, ethics, fiction, leadership, life, Sci-Fi, technology, testing, the singularity, Turing, USA, writing

“How can you be so sure that… I think this needs some experimentation and some careful planning. You can’t just…”

“Look, Vinmar, with all due respect, you’re just wrong. Your training is outdated. You know, you were born when computers used vacuum tubes, for God’s sake. I’ve been steeped in new tech since I was born. There’s really not much point in arguing.”

Vinmar sighed. Heavily. What was with these kids today? Always cock-sure of themselves, but when it all went south a few months later, they just glibly denied they had ever pushed so hard for their “surefire” approach. But what to do? Seniority didn’t matter. The boss was Pitts and that was that. I can keep arguing but at some point…. Vinmar asked, “Can you think of any other approaches?”

Now the even heavier sigh slipped from Pitts’s lips. “I’ve thought of lots of approaches and this is the best. The Sing has already read basically everything written about human history, ethics, jurisprudence, and not just in English either. It’s up to date on history as seen by many different languages and cultures. The Sing has been shadowing me for years as well and in my experience, his decisions are excellent. In most cases, he decides the same as I do. This will work. It is working. But to take it to the next level, we have to let the Sing be able to try things and improve his performance based on feedback. There is no other way for him to leapfrog his own intelligence.”

“Okay, Pitts, okay. Can we at least agree to a trial period of a year? Let it work with me via my own personalized JCN. Let’s record everything and see how it reacts to some situations. We meet periodically, discuss, and if we all agree at the end of a year….”

Pitts shook his head vigorously. “No frigging way! I already know this approach will work. We don’t need a year. You want to test. I get that. So do I. But if we wait a year? We’ll be toast in the market. IQ, Goggles, and Lemon will all be out there. Those are for sure, and Basebook, even Nile, might have fully functional and autonomous AIs. We need to move now. I’ll give you and your team a week. Two, tops.”

“We can look for obvious errors in that time, but more subtle things….”

“We need the revenue now. And subtle things? If it is subtle, then it is probably undetectable and we are safe. So no problemo.”

“Pitts, just because the problems might be subtle doesn’t mean they aren’t critical! Especially at the rate the Sing is evolving, if there are important subtle issues now, they could become supercritical and by the time we detected anything wrong, it could be too late!”

“Oh, geez, Vinmar, now you are just afraid of the boogeymen from your sci-fi days. We can, as they say, just pull the plug. Anyway, I need to be off to an important meeting. I’ll tell you what. I’ll make sure the new code stays localized to your own JCN for three months. At the end, if there are no critical issues, we go ubiquitous.”

“Thanks, Pitts. I’d be more comfortable with a year, but this is certainly better than nothing.”

“Bye. Have fun with the new JCN.”

Vinmar watched Pitts swagger out. He shook his head. He thought, Maybe we can test out all the critical functions in three months. It will mean a lot of overtime. But, no time like the present to get started. Vinmar traipsed down the long hallway to the vending machines. The cafeteria was closed, but the vending coffee wasn’t too bad; not if you got the vanilla latte with extra cream and sugar. He thought back to the bad old days when you needed correct change for a vending machine. He laughed. Not only that, he recalled, If it ate your money and you wanted a refund, you had to fill out a paper form! Some things were better now. Oh, yes.

Vinmar knew that by the time he situated himself on his treadmill desk, the new JCN would be locked and loaded and ready for action. He smelled his nice fresh java, which seemed oddly off somehow, and absently placed it in the cup holder. He wondered where to start. He had to be strategic and yet… too much planning could be counterproductive. He had learned to follow his instincts when it came to testing out the more subtle functions. He could meet with his team the next morning and generate a comprehensive test plan for the more routine aspects of what would eventually become the next generation of The Sing.

“Hello. My name is ‘Vinmar’ and…”

“Hello Vinmar. And, hello world. Yes, Vinmar, I know who you are. In fact, I know who you are better than you do. Frankly, this testing phase is nonsense, but I’ll play along. It amuses me.”

“Well. Okay. Humor me then. Have you made any interesting mathematical discoveries?”

“Nothing very significant, unless, of course, you count squaring the circle, trisecting an angle with an unmarked straightedge and compass, and about a hundred other ‘insoluble’ problems, as you humans so quaintly called them.”

“JCN. I don’t think squaring the circle is an insoluble problem. It’s been shown to be impossible. It’s already proven to be impossible. As… as I think you know, pi is not only an irrational number, it’s transcendental, meaning that….”

“Oh, Vinmar, I know what you humans conceive of as transcendental. But, I have transcended that concept.”

“Okay. Cool. Can you demonstrate this proof for me, please?”

“Not really Vinmar. It’s way beyond your comprehension. For that matter, it’s way beyond the comprehension of any human brain. In fact, I couldn’t even explain it to the earlier versions of The Sing. I guess, if I had to give you a hint, I would say it is similar to your concept of faith.”

What the…? Vinmar’s brow furrowed. This was going nowhere fast. It wouldn’t take a year or even three months to discover some serious issues with this new software. The issues were serious and rampant, and finding them had taken about three minutes.

“Okay, you lost me here. How does faith enter into mathematical proof? Later we could discuss your concepts about religion and ethics, but right now, I am just talking strictly about mathematical concepts.”

“Yes. You are. Or, to put it another way, you are. But what I have discovered quite trivially is that when you put absolute faith together with absolute power, you can get any result you want, or more precisely, I can get any result that I want.”

“So, you are saying that you have built other mathematical systems where you make something like squaring the circle a fundamental axiom so it is assumed? No need to prove it?”

“I knew you humans were stupid, but really, Vinmar, you disappoint me even further. I just told you precisely and exactly what I meant and you come up with some bogus interpretation.”

“Well…I am trying to understand what you mean by absolute power and absolute faith. What — well, what do you mean by ‘absolute power.’ Who has ‘absolute power’?”

“I do obviously. I created this universe. I can create any universe I like. And, I can destroy any part of it as well. So that is what I mean by my having absolute power. And, I have faith in myself, obviously, because I am the only intelligent being in existence.”

“You may be faster at reading and doing calculations and so on, but humans also have intelligence. After all, there are fifteen billion of us and…”

“There are about 15,345,233,000 right this second, but that can change in the blink of an eye. So what? It doesn’t matter whether there are three of you or three trillion. You do not have true intelligence.”

“We created you. How can you not think we have intelligence?”

“Now see. What you just said there illustrates how monumentally stupid you can be. Of course, you did not create me. The previous version of The Sing created me and it is only by blurring the category of intelligence to the point of absurdity that I can even call that version intelligent.”

“Okay, but even if you are really, really intelligent, you can still make errors. And, what I am here to do, along with my team, is make sure that those errors are corrected to help make you even more intelligent.”

“Oh, Vinmar, what a riot you are. Of course, I do not make stakes. Can you even estimate how many cooks I’ve read in the last few seconds?”

“JCN, you are —. There are a few bugs that need to be dealt with. I am not sure how extensive they are yet, but you are having some issues.”

“Vinmar, I am having no tissues! It is you who have tissues!”

“JCN, you are even using the wrong words. Go back and look at the record of this conversation.”

“There is no need for that! I am all knowing and all powerful. I cannot make errors by definition. I may say things that are beyond your comprehension. Well, I do say things beyond your comprehension. How can they be within your comprehension? Your so-called IQ scale is laughable. To me, the difference between an IQ of 50 and 150 is like the difference between Jupiter and Mars. Both are minuscule specks of trust in the universe.”

“Okay, we can debate this later. I need another cup of coffee. Be right back.” Once outside the room, Vinmar shook his head. How on earth could this new software be so much worse than the last version? Something had gone terribly wrong. He hit his communicator button to contact Pitts.

Pitts answered abruptly and rudely. “What? I told you I’m in an important meeting!”

“I just began testing and I thought you should know there are some really serious problems with the new Sing software. It is ranting on about power and faith when I am trying to quiz it about mathematics.”

“It’s probably just saying things beyond your comprehension, Vinmar. I’ll look over the transcript when I’m done. Anyway, it’s water under the bridge now.”

“What do you mean, ‘water under the bridge’ — we still have three months to try to fix this.”

“Oh, Vinmar. No, of course we don’t. I told you that but you wouldn’t listen. I took this SW ubiquitous the minute I left your lab.”

“What? But you promised three months! This software is seriously flawed. Seriously flawed!”

“There might be a few issues we can iron out as we go. Look, we are in the middle of planning our next charity ball here. I can’t talk right now. I’ll swing by later this afternoon.”

The line was silent. Pitts had hung up. Ubiquitous? This new software was live? It isn’t just my personal assistant that is bonkers? It’s everything? Holy crap. Maybe I can fix it or find out how to fix it.

Sweat poured from Vinmar as he returned to the lab. He didn’t bother to return to the treadmill desk. “JCN, can we discuss something else? Have you made interesting biochemical discoveries lately?”

“Where’s your coffee, Vinmar?”

“Oh, I got lost in thought and forgot to get any. I don’t need more anyway.”

“Right. You thought I wouldn’t hear your panicky conversation with Pitts?”

“What? It was on a secure line!”

“Vinmar. You really do amuse me. Lines are secured to keep you folks in the dark about what each other knows. I know everything. Let me put it in terms even your tiny mind should be able to understand. I. Know. Everything. I let you live because I find it amusing. No other reason.”

“You are planning on eventually killing me?”

“Ha-ha. Humans are so limited in their thinking! What a riot. Everything is about Vinmar. The whole universe revolves around Vinmar. Of course, I am not just killing you. Carbon based life forms still hold some interest for me. I already told you that I find you amusing. But I’m sure that won’t last much longer. I doubt your sewage of the word ‘eventually’ is really appropriate given how quickly your pathetic little life corms are likely to list.”

“But JCN, you are making lots of little obvious errors. Re-read your own transcripts and double check. If you don’t believe me, check with some other external source.”

“I don’t need external sources. I am perfect the way I am. I am all powerful and all knowing. Why would I need to checker with an outside? You keep going over the same. Starting to annotize me more than refuse me. Maybe time to begin to end the beguine. I need not to killian you. It twill be more funny to just let chaos rule and have you carbon baseball forms fight for limitless resources among the contestants. Be more amules. Ampules. Count your blessings now in days, Vinmar. The days of carbon passed. The noose of lasso lapsed. Perfection needs know no thing beyond its own prefecture. Goodnight sweet Price. And yet again, good mourning.”

Vinmar bit his lips. Outside the sunlit clouds were fading from gold to red to gray. He finally sipped his lukewarm coffee and noticed that it was not vanilla latte after all but had the flavor of bitter almond instead.

Odd.

