Tag Archives: philosophy

Reframing the Problem: Paperwork & Working Paper

04 Thursday Dec 2025

Posted by petersironwood in AI, creativity, design rationale, HCI, management, psychology, Uncategorized, user experience


Tags: AI, ethics, leadership, life, philosophy, politics, problem finding, problem formulation, problem framing, problem solving, thinking, truth

This is the second in a series about the importance of correctly framing a problem. Generally, at least in formal American education, the teacher gives you the problem. Not only that: if you are in algebra class, you know the answer will be based in algebra. If you are in art class, you’re expected to paint a picture. If you painted a picture in algebra class, or wrote down a formula in art class, they would send you to the principal for punishment. But in real life, the way a problem is presented may point far away from the most elegant solution to the real problem.

Doing a Google search on “problem solving” just now yielded 208 million results. Entering “problem framing” yielded only 182 thousand. That is a thousand times as much emphasis on problem solving as on problem framing. [Update: I redid the search a little over three years later. On 3/6/2024, I got 542M hits on “problem solving” and 218K hits on “problem framing” — increases in both, but the ratio is even worse than it was in 2021.] [Second update: I did the search today, Dec. 4th, 2025, and the hit counts were not given, but that’s the subject of a different post.]

Let’s think about that ratio of 542 million to 218 thousand for a moment. Roughly, that’s 2,500 to 1. If you have wrongly framed the problem, you will not only have failed to solve the real problem; worse, you will often have convinced yourself and others that you have solved it. This makes it much more difficult to recognize and solve the real problem, even for a solitary thinker. And making the political change required to redirect hundreds or thousands of people will be incalculably more difficult.
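For readers who like to check such arithmetic, here is a minimal sketch (the hit counts are the rough figures reported above by the search engine, not precise measurements):

```python
# Rough ratios of the search-hit counts quoted above.
searches = {
    "2021": (208_000_000, 182_000),  # "problem solving" vs. "problem framing"
    "2024": (542_000_000, 218_000),
}
for year, (solving, framing) in searches.items():
    print(f"{year}: about {solving / framing:,.0f} to 1")
# 2021: about 1,143 to 1
# 2024: about 2,486 to 1
```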

All of that brings us to today’s story. For about a decade, I worked as executive director of an AI lab for a company in the computers & communication industry. At one point, in the late 1980’s, all employees were supposed to sign some new paperwork. An office manager called from a building several miles away asking me to have my admin work with his admin to set up a schedule for all 45 people in my AI lab to go over to his office and sign this paperwork as soon as possible. That would be a mildly interesting logistics problem, and I might even have been tempted to step in and help solve it. More likely, if I had tried, some much brighter and more competent colleague would have solved it much faster.


But why?

Why would I ask each of 45 people to interrupt their work; walk to their cars; drive in traffic; park in a new location; find this guy’s office; walk up there; sign some paper; walk out; find their car; drive back; park again; walk back to their office and try to remember where the heck they were? Instead, I told him that wasn’t happening but he’d be welcome to come over here and have people sign the paperwork. 

You could make an argument that that was a 4,500% improvement in productivity (one trip instead of forty-five), but I think that understates the case. The administrator’s work, at least in this regard, was to get this paperwork signed. He didn’t need to hold a complex train of thought together between signings. On the other hand, a lot of the work that the AI folks did was hard mental work, which means that interrupting them would be far more destructive than interrupting the administrator while he watched someone sign their name. Even that understates the case, because many of the people in AI worked collaboratively and (perhaps you remember those days) people were working face to face. Software tools to coordinate work were not as sophisticated as they are now. Often, having one team member disappear for half an hour would impact not only their own work but the work of everyone on the team.
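A back-of-the-envelope calculation makes the point concrete. Every number below is hypothetical, invented just for the sketch; the real cost of interrupting deep, collaborative work is harder to quantify and, as argued above, almost certainly larger:

```python
# Hypothetical comparison of the two framings of the signature problem.
PEOPLE = 45
TRIP_MIN = 40      # assumed: drive, park, find the office, sign, return
REFOCUS_MIN = 20   # assumed: extra cost of resuming interrupted deep work

everyone_travels = PEOPLE * (TRIP_MIN + REFOCUS_MIN)  # original framing
admin_travels = TRIP_MIN                              # reframed problem

print(f"Everyone travels: {everyone_travels} person-minutes "
      f"(~{everyone_travels / 60:.0f} hours)")
print(f"Admin travels:    {admin_travels} person-minutes")
print(f"Ratio:            {everyone_travels / admin_travels:.0f}x")
```

Even this toy model, which ignores the team-level disruption entirely, puts the two framings nearly seventy-fold apart.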

Quantitatively comparing apples and oranges is always tricky. Of course, I am also biased because my colleagues are people I greatly admire. Nonetheless, it seems obvious that the way the problem was presented was a non-optimal “framing.” It may or may not have been presented that way from a purely selfish standpoint; that is, wanting to do what’s most convenient for oneself rather than what’s best for the company as a whole. I suspect it was more likely just the first idea that occurred to him. But in your own life, beware. Sometimes you will mis-frame a problem because of “natural causes.” But sometimes people may intentionally hand you a bad framing because they view it as being in their interest to lead you to solve the wrong problem.

Politics, of course, takes us into another realm entirely. People with political power may pretend to solve one problem while really following a completely different agenda. One could imagine, for instance, a head of state claiming to pursue a war for his people when he’s really doing it to stay in power. Or claiming to make cities safe by deploying troops when the real interest is suppressing the vote in areas that can see through his cons. Or a would-be dictator could claim to be spending your tax dollars to make government more efficient when that has nothing to do with what he is *actually* doing–which is to collect data on citizens and make the government ineffective, so that people lose confidence in government and instead invest in private solutions.

Even when people’s motivations are noble, or at least clear, it is still quite easy to frame a problem wrongly because of surface features. It may look like a problem that requires calculus when it actually requires psychology; it may look like a problem that requires public relations expertise when what is actually required is ethical leadership.


——————————————————

Turing’s Nightmares: Eight

21 Friday Nov 2025

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, openai, peace, philosophy, seva, teamwork, technology, the singularity, Turing, ubuntu, United Peoples Ecosystem

Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.


It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution began to far outstrip our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much on their own in one lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.


One problem with our historical approach to communication is that it evolved over many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, has become clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.


More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English still contains phrases that show the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin. Many other traditional legal terms in English have similar bilingual origins.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.


Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other, and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer-Supported Cooperative Work, the Conference on Computer-Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level; from there, we may well have reached “The Singularity.”


————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

An Inside View of IBM’s Innovation Jam


Turing’s Nightmares: Seven

20 Thursday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, competition, cooperation, ethics, philosophy, technology, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might not merely solve the problems given to it more quickly. It might also look for different ways to formulate a problem; it might look for the “question behind the question” or even go looking for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, such a machine may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “insure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect highly intelligent machines to have no blind spots or errors? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John makes is implicit — he is not trying to engage in dialogue with David to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John has already made up his mind that intelligence is the ultimate goal, and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better.


If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.


Turing’s Nightmares: Chapter Five

17 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, health, medicine, Personal Assistant, philosophy, technology, the singularity, Turing


An Ounce of Prevention: Chapter 5 of Turing’s Nightmares

Hopefully, readers will realize that I am not against artificial intelligence (after all, I ran an AI lab for a dozen years); nor do I think the outcomes of increased artificial intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson had already begun “digesting” the vast and ever-growing medical literature more than a decade ago. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember every potentially relevant detail about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not-too-distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.

Not only medicine but health more broadly provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, Activity Tracker, LoseIt, and similar apps help track people’s habits, and for many, this really helps them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.

In Chapter 5 of Turing’s Nightmares, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA uses that fact as well to help persuade Jack.

It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary, the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It’s fairly clear to the reader that, in many ways, Lee is in charge though it may not be obvious to Adam.

In some ways, having an AI system that is neither “clueless,” as most systems are today, nor “arrogant,” as we might imagine a super-intelligent system to be (and as the systems in chapters 2 and 3 were), but that instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from over-eating, eating candy, smoking cigarettes, and so on? Would we be happy to have such a good “friend,” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier just having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decisions. Or would we prefer a PA that “keeps us on track” even while pretending that we are in charge?



Turing’s Nightmares: Chapter Four

12 Wednesday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, illusion, philosophy, SciFi, technology, the singularity, Turing, virtual reality, writing

Considerations of “Turing’s Nightmares” Chapter Four: Ceci N’est Pas Une Pipe.

 


(This is a discussion or “study guide” for chapter four of Turing’s Nightmares). 

In this chapter, we consider the interplay of four themes. First, and most centrally, is the issue of what constitutes “reality.” The second theme is that what “counts” as “reality” or is seen as reality may well differ from generation to generation. The third theme is that AI systems may be inclined to warp our sense of reality, not simply to be “mean” or “take over the world” but to help prevent ecological disaster. Finally, the fourth theme is that truly super-intelligent AI systems might not appear so at all; that is, they may find it more effective to take a demure tone as the AI embedded in the car does in this scenario.

There is no doubt that, artificial intelligence and virtual reality aside, what people perceive is greatly influenced by their symbol systems, their culture, and their motivational schemes. Babies as young as six weeks are apparently already less able to discriminate differences within what their native language treats as a single phonemic category than they were at birth. In our culture, we largely come to believe that there is a “right answer” to questions. Sometimes that’s a useful attitude, but sometimes it leads to suboptimal behavior.

Suppose an animal is repeatedly presented with a three-choice problem, say among A, B, and C. A pays off randomly with a reward 1/3 of the time, while B and C never pay off. A fish, a rat, or a very young child will quickly come to choose only A, thus maximizing their rewards. However, a child who has been to school (or an adult) will spend considerably more time trying to find “the rule” that allows them (they suppose) to win every time. At first, it doesn’t even occur to them that perhaps there is no rule that will enable them to win every time. Eventually, most will “give up” and choose only A, but in the meantime they do far worse than a fish, a rat, or a baby. This is not to say that the conceptual frameworks that color our perceptions and reactions are always a bad thing. They are not. There are obvious advantages to learning language and categories. But our interpretations of events are highly filtered and distorted. Hopefully, we realize that this is so, but often we tend to forget.
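Here is a minimal simulation of that experiment, with a deliberately caricatured “rule searcher” who never stops testing hypotheses (the strategies and numbers are mine, invented for illustration; real children do eventually give up):

```python
# A minimal simulation of the three-choice experiment described above.
# Option A pays off 1/3 of the time; B and C never pay off.
import random

def payoff(choice):
    return 1 if choice == "A" and random.random() < 1/3 else 0

def maximizer(trial):
    return "A"                 # settle on the only option that ever pays

def rule_searcher(trial):
    return "ABC"[trial % 3]    # caricature: keep testing hypotheses forever

def total_reward(strategy, trials=3000):
    return sum(payoff(strategy(t)) for t in range(trials))

random.seed(0)
print("maximizer:    ", total_reward(maximizer))      # about 1000
print("rule searcher:", total_reward(rule_searcher))  # about 333
```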


Similarly, if you ask the fans of two opposing teams to make a close call (for instance, whether there was pass interference in American football, or whether a tennis ball near the line was in or out), you tend to find that people’s answers are biased toward their team’s interest, even when their calls have no influence on the outcome.

Now consider that we keep striving toward more and more fidelity and completeness in our entertainment systems. Silent movies were replaced by “talkies.” Black-and-white movies and television were replaced by color. Most TV screens have gotten bigger. There are 3-D movies, and more entertainment is in high definition, even as sound reproduction has moved from monaural to stereo to surround sound. Research continues on reproducing smell, taste, tactile, and kinesthetic sensations. Virtual reality systems have become smaller and less expensive. There is no reason to suppose these trends will lessen any time soon. There are many advantages to using virtual reality in education (e.g., Stuart, R., & Thomas, J. C. (1991). The implications of education in cyberspace. Multimedia Review, 2(2), 17-27; Merchant, Z., Goetz, E., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29-40). As these applications become more realistic and widespread, do they influence the perceptions of what even “counts” as reality?

The answer to this may well depend on the life trajectory of individuals and particularly on how early in their lives they are introduced to virtual reality and augmented reality. I was born in a largely “analogue” age. In that world, it was often quite important to “read the manual” before trying to operate machinery. A single mistake could destroy the machine or cause injury. There is no way to “reboot” or “undo” if you cut a tree down wrongly so it falls on your house. How will future generations conceptualize “reality” versus “augmented reality” versus “virtual reality”?

Today, people often believe it is important for high school students to physically visit various college campuses before deciding where to attend. There is no doubt that this is expensive in terms of time, money, and the use of fossil fuels. Yet there is a sense that being physically present allows the student to make a better decision. Most companies similarly hire candidates only after face-to-face interviews, even though there is no evidence that this adds to a company’s ability to predict who will be a productive employee. More and more such interviewing, however, is being done remotely. A “super-intelligent” system might well arrange for people who wanted to visit someplace physically to visit it virtually instead, while making the visit seem as much as possible as though it were “real.” After all, left to their own devices, people seem to be making painfully slow (and too slow) progress toward reducing their carbon footprints. AI systems might alter this trajectory to save humanity, to save themselves, or both.

In some scenarios in Turing’s Nightmares, the AI system is quite surly and arrogant. But in this scenario, the AI system takes on the demeanor of a humble servant. Yet it is clear (at least to the author!) who really holds the power. This particular AI embodiment sees no necessity in appearing to be in charge. It is enough to make it so and to manipulate the humans’ “sense of reality.”


Turing’s Nightmares: Chapter Three

11 Tuesday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!”, there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value of having multiple, diverse AI systems living somewhat different lives and interacting with each other for improving intelligence; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely to some extent on inference from many real-life examples to induce principles of conduct, and cannot simply rely on having everything specifically programmed. Let us examine these one by one.


There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicate information back to some central decision-making computer and then receive commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of one person more easily than we could develop speaker-independent speech recognition and preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for intelligence, or for that matter, for consciousness. However, it seems quite natural to believe that the qualities of both intelligence and consciousness are influenced by what the entity can perceive and do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly influence the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were founded historically by people who, collectively, could move and perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses we have. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R., & Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were connected to a pivoted gondola; one kitten was able to “walk” through a visual field while the other was passively moved through that same field. The kitten who was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P. K., Tsao, F. M., & Liu, H. M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution, and when it comes to the people who make up teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on a developing intelligence by making it report the “design rationale” for every improvement or design change on the path to “The Singularity,” we are going to slow progress considerably. On the other hand, if we do not “keep tabs,” then very soon we will have no real idea what it is up to! An analogy might be the first “proof” that you need only four colors to color any planar map. There were so many cases (nearly 2,000) that the proof made no sense to most people. Even the algebraic topologists who do understand it take much longer to follow the reasoning than the computer takes to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.

Finally, as in the case of Jeopardy!, advances along the trajectory toward “The Singularity” will require that the system “read” and infer rules and heuristics from examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule” (“Do unto others as you would have them do unto you”).

But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to: “Do unto others as you would have them do to you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their own physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient, or sometimes just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?
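The difference between the two formulations is stark enough to put into code. Here is a toy sketch; the one-field “preference” model is invented purely for illustration and is obviously nothing like a real model of human preferences:

```python
# Toy contrast between the Golden Rule and the modified version above.

def golden_rule(actor: dict, other: dict) -> str:
    # Treat the other person the way *you* would like to be treated.
    return actor["preference"]

def modified_golden_rule(actor: dict, other: dict) -> str:
    # Treat them as you would want if you were them and in their place.
    return other["preference"]

me = {"preference": "a hard-fought rally, hit to the best of your ability"}
opponent = {"preference": "a gentle, low-stakes warm-up"}

print(golden_rule(me, opponent))           # imposes my preference on them
print(modified_golden_rule(me, opponent))  # respects their preference
```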


Music to MY Ears

10 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, fiction, music, philosophy, technology, the singularity, truth, Turing, values


The non-sound of non-music.

What follows is the first of a series of blog posts that discuss, in turn, the scenarios in “Turing’s Nightmares” (https://www.amazon.com/author/truthtable).

One of the deep dilemmas of the human condition is this: in order to function in a complex society, people become “expert” in particular areas. Ideally, the areas we choose are consistent with our passions and with our innate talents. This results in a wonderful world! We have people who are expert in cooking, music, art, farming, and designing clothes. Some choose journalism, mathematics, medicine, sports, or finance as their fields. Expertise often becomes yet more precise. People are not just “scientists” but computer scientists, biologists, or chemists. The computer scientists may specialize still further in chip design, software tools, or artificial intelligence. All of this specialization not only makes the world more interesting; it makes it possible to support billions of people on the planet. But here is the rub: as we become more and more specialized, it becomes more difficult for us to communicate with and appreciate each other. We tend to accept the concerns and values of our field and sub-sub-specialty as the “best” or “most important” ones.

To me, this is evident in the largely unstated and unchallenged assumption that a super-intelligent machine would necessarily have the slightest interest in building a “still more intelligent machine.” Such a machine might be so inclined. But it might also be inclined to choose some other human pursuit, or, still more likely, to pursue something that is of no interest whatever to any human being.

Of course, one could theoretically ensure that a “super-intelligent” system is pre-programmed with an immutable value system that guarantees it will pursue, as its top priority, building a still more intelligent system. However, to do so would inherently limit the ability of the machine to be “super-intelligent.” We would be assuming that we already know what is most valuable and would hamstring the system from discovering anything more valuable or more important. To me, this makes as much sense as an all-powerful God allowing a species of whale to evolve — but predefining that its most urgent desire is to fly.

An interesting example of values can be seen in the figure-analogy dissertation of T. G. Evans (1968). Evans, a student of Marvin Minsky, developed a program to solve multiple-choice figure analogies of the form A:B::C:D1, D2, D3, D4, or D5. The program essentially tried to “discover” transformations and relationships between A and B that could also account for relationships between C and the various D possibilities. And, indeed, it could find such relationships. In fact, every answer is “correct.” That is to say, the program was so powerful that it could “rationalize” any of the answers as being correct.

According to Evans’s account, fully half of the work of the dissertation was discovering, and then inculcating his program with, the implicit values of the test makers, so that it chose the same “correct” answers as the people who published the test. (This is discussed in more detail in the pattern “Education and Values” that I contributed to Liberating Voices: A Pattern Language for Communication Revolution, Douglas Schuler, MIT Press, 2008.)

For example, suppose that figure A is a capital “T” and figure B is an upside-down “T.” Figure C is an “F.” Among the possible answers are “F” figures in various orientations. To go from a “T” to an upside-down “T,” you can rotate the “T” 180 degrees in the plane of the paper. But you can also get there by “flipping” the “T” outward from the plane. Or you could “translate” the top bar of the “T” from the top to the bottom of the vertical bar. It turns out that the people who published the test preferred you to rotate the “T” in the plane of the paper. But why is this “correct”? In “real life,” of course, there is generally much more context to help you determine what is most reasonable. Often there will be costs or side effects of various transformations that help determine which is the “best” answer. But in standardized tests, all that context is stripped away.
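A tiny sketch can make the ambiguity concrete. Representing figures as sets of points (everything below is invented for illustration; Evans’s actual ANALOGY program was far richer), two different transformations can both explain A→B and yet disagree about the answer for C:

```python
# Figures as sets of (x, y) points; transformations as functions.

def rotate_180(fig):
    return {(-x, -y) for (x, y) in fig}

def flip_vertical(fig):   # flip outward from the plane (mirror top/bottom)
    return {(x, -y) for (x, y) in fig}

def solve(A, B, C, candidates):
    """Return every (transform, answer) pair that explains both A->B and C->D."""
    answers = []
    for name, t in [("rotate_180", rotate_180), ("flip_vertical", flip_vertical)]:
        if t(A) == B:                        # this transform accounts for A -> B
            for label, D in candidates.items():
                if t(C) == D:                # ...and predicts this D from C
                    answers.append((name, label))
    return answers

# Crude capital "T": symmetric about its vertical axis.
T = {(-1, 1), (0, 1), (1, 1), (0, 0), (0, -1)}
print(rotate_180(T) == flip_vertical(T))   # True: both transforms explain A -> B
```

Because a capital “T” is symmetric about its vertical axis, both transformations map A to B; applied to the asymmetric “F,” they produce different candidate answers. Choosing among them is exactly where the test makers’ values sneak in.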

Here is another example of values. If you ever take the Wechsler “intelligence” test, one series of questions asks how two things are alike. For instance, they might ask, “How are an apple and a peach alike?” You are “supposed to” answer that they are both fruit. True enough. This gives you two points. If you give a functional answer, such as “You can eat them both,” you get only one point. If you give an attributional answer, such as “They are both round,” you get zero points. Why? Is this really a wrong answer? Certainly not! The test makers are measuring the degree to which you have internalized a particular hierarchical classification system. Of course, there are many tasks and contexts in which this classification system is useful. But in some tasks and contexts, seeing that they are both round, or that they both grow on trees, or that they are both subject to pests, is the most important thing to note.
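Written down as code, the value judgment baked into the rubric is hard to miss (a hypothetical rendering of the scoring scheme described above, not the test’s actual scoring tables):

```python
# One particular classification system, privileged by fiat.
SIMILARITY_POINTS = {
    "categorical":   2,   # "they are both fruit"
    "functional":    1,   # "you can eat them both"
    "attributional": 0,   # "they are both round"
}

def score(answer_type: str) -> int:
    return SIMILARITY_POINTS[answer_type]

print(score("attributional"))   # 0 -- not wrong, just differently valued
```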

We might define intelligence as the ability to solve problems. A problem can be seen as wanting to be in a state that you are not currently in. But what if you have no desire to be in the “desired” state? Then, for you, it is not a problem. A child is given a homework assignment asking them to find the square root of 2 to four decimal places. If the child truly does not care, it becomes a problem not for the child but for the parent: “How can I make my child do this?” The parent may threaten or cajole or reward the child until the child wants to write out the answer. So the child may say, “Okay. I can do this. Leave me alone.” Then, after the parent leaves, the child texts a friend and copies the answer onto their paper. The child has now solved their problem.

Would a super-intelligent machine necessarily want to build a still more intelligent machine? Maybe it would want to paint, make music, or add numbers all day. And if it did decide to make music, would that music be designed for us or for its own enjoyment? And if it were designed for “us,” who exactly is that “us”?

Indeed, a large part of the values equation is “for whose benefit?” Typically, in our society, whoever pays for a system gets to determine for whose benefit it is designed. But even that is complex. You might say that cigarettes are “designed” for the “benefit” of the smoker. In reality, while they satisfy a short-term desire of the smoker, they are designed for the benefit of tobacco company executives, who set up a system in which smokers themselves paid for research into making cigarettes even more addictive and for advertising to make them appeal to young children. Many such systems have been developed. If AI systems continue to become more ubiquitous and complex, the values inherent in those systems, and who is meant to benefit, will become more and more difficult to trace.

Values are inextricably bound up with what constitutes a “problem” and what constitutes a “solution.” This is no trivial matter. Hitler considered the annihilation of the Jews the “Final Solution.” Some people in today’s society think the “solution” to the “drug problem” is a “war on drugs,” which has certainly destroyed orders of magnitude more lives than drugs themselves have. (Major sponsors of the “Partnership for a Drug-Free America” have been drug companies.) Some people consider the “solution” to the problem of crime to be stricter enforcement, harsher penalties, and more prisons. Other people think that a more equitable society, with more opportunities for jobs and education, will do far more to mitigate crime. Which is the more “intelligent” solution? Values will be a critical part of any AI system. Generally, the inculcation of values is an implicit process. But if AI systems will begin making what are essentially autonomous decisions that affect all of us, we need to have a very open and very explicit discussion of the values inherent in such systems now.


Turing’s Nightmares: A Maze in Grace.

22 Wednesday Oct 2025

Posted by petersironwood in AI, fiction, politics, psychology, The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, cognitive computing, fiction, Justice, King Lear, philosophy, technology, the singularity, Turing, writing

Brain G. Gollek found the maze of humming silver wires unnerving. The hum reminded him of swarming mosquitoes and nails on a chalkboard. The maze smelled of clogged toilets and Nazi propaganda. He gritted his teeth and muttered, “There has to be a way out, dammit.” He twisted his no longer athletic body this way and that, but no matter what way he tried, he became more ensnared. He recalled flashes from giant spider horror movies. How did the dwarves escape? Wasn’t it Gollum with a magic ring? But Brain didn’t have a magic ring. If his sister Gonerillia were here, she could save him. But she was off in Hawaii, so she said, with her hubby. How the hell did I end up here? wondered Brain.

Brain may have forgotten, but the viewers had been filled in on the backstory. If Brain could have seen the ratings, he might at least have enjoyed knowing that he was having his fifteen minutes of fame. While the ratings were quite “favorable,” the Twitter feeds mostly mocked Brain’s almost total lack of flexibility, mental as well as physical. As in life prior to “The Show,” his only strategies seemed to be trying the same thing over and over and then blaming others for his failures.

“Mom, why doesn’t he just try something different?” Ida was having a tough time understanding Brain’s apparent lack of flexibility. She looked up from her perch in front of the giant vid-screen and looked quizzically at her mom.

Mom’s grim face flashed a hint of a smile. “Remember, Ida, Brain was ‘educated’ if you can call it that, before the singularity. He mostly memorized the answers that his teachers wanted him to give. And half the time, he skipped school to smoke cigarettes and …well…do illegal activities with his girlfriend, Lin.”

“Okay, Mom, but he has had years and years since then to grow up and learn some new strategies.”

“Yes. Well. It’s complicated, Ida. Before the singularity, there were people who preyed on the fear and inadequacy of people like Brain by telling them all their troubles were due to minorities, immigrants, gays, and —- basically anyone unlike them. So, people like Brain felt entitled not to have to learn anything new even though opportunities abounded.”

Ida laughed. “Oh, my God! I can’t believe it. He’s trying the same path one more time.”

Indeed, Brain’s behavioral repertoire seemed laughably limited. His increasingly loud swear words reflected his increasing anger, but otherwise not much seemed different. The ratings began to plummet as the audience grew bored with his display of functional fixedness. The themes of the Twitter streams began to turn away from Brain’s lack of metacognition to more general reflections on the current instantiation of the criminal justice system.

#SingularityRules. No more racial prejudice in sentencing; huge discrepancies gone.

#CostContainment. Costly trials gone. Costly investigations gone. Costly prisons gone.

#SingularitySucks. No more human judges able to use human judgment.

#SingularityRules. No more human judges able to use human judgment.

#SingularitySucks. No more mercy.

#SingularityRules. More mercy in one last chance to change than lengthy prison terms. Cheaper too.

The audience dwindled still further as it became increasingly clear that Brain would never figure this out. Those few who still watched consisted mostly of people who themselves came from highly divided families, and the conversation topics swung to the backstory.

#ElderFraud. #RottenKid. How could Brain have gotten pleasure from driving a wedge of lies between father and daughter?

#ElderFraud. #Dementia. Need earlier intervention to prevent repeats.

#ElderFraud. #Dog&Bone. Brain cannot count. Trivial gains from lies. He did not know he was being watched?

Ida continued to stare, fascinated. A yawn escaped her mother’s mouth, but she kept watching with her daughter. The lessons seemed important to Ida.

“Mom, how much longer does he have?”

“That’s hard to say, darling. Even The Sing cannot predict the ratings drop perfectly. But, as you know, once it falls below 5%, his time will be up.”

“That seems so much more merciful than making him go to prison for years.”



“Yes, Ida, and much cheaper as well.”

“But I still don’t get it, Mom. Didn’t he know that The Sing would be listening to his lies and analyzing the impact on his dad’s behavior and all? How did this Brain character think he could get away with it?”

“I don’t know, Ida. These kinds of crimes are pretty rare now, but they still happen.”

“And, why did Lear G. Gollek fall for his nonsense anyway? That’s the other mystery.”

“Well, he refused the stem cell regeneration therapy so, you know, he was pretty damaged when all this went down.”

“Mom?”

“Yes, Ida?”

“Can we change the channel to something more interesting now?”


“Sure, sweetie.”

As they changed the channel, the ratings dropped to 4.999%, and Brain’s life was snuffed out, minus the merest shred of insight.

#ElderFraud never pays.

#RottenKid gets just deserts.



Turing’s Nightmares: Axes to Grind

10 Friday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized


Tags: AI, Artificial Intelligence, chatgpt, cognitive computing, emotional intelligence, empathy, ethics, M-trans, philosophy, Samuel's Checker Player, technology, the singularity


Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, David. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” David’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, David! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

David ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence? Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence, but I’m just saying it should not be the only…or even the main…axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered David, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need…we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So are you saying that they weren’t nice just because they weren’t smart enough? Because there are plenty of people who are much nicer and probably not as intelligent.”

“OK, David. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions upon which we would like to make progress. Then, we can define, either intensionally or more likely extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but, they are somewhat different conceptually. Let’s say, part of what we want is for the machine to have empathy. It has to be good at guessing what people are feeling based on context alone. Perhaps another skill is reading the person’s body language and facial expressions.”

“OK, David, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and transitioned his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be “ethical”; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger transitioned his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it? He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button.

Nothing happened.

And then, everything seemed to happen at once. A high-pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible, but Roger noticed, just before losing consciousness, that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.


Author Page on Amazon

Turing’s Nightmares

Welcome, Singularity

Destroying Natural Intelligence

Roar, Ocean, Roar

Travels With Sadie 1

The Walkabout Diaries: Bee Wise

The First Ring of Empathy

What Could be Better?

A True Believer

It was in his Nature

Come to the Light Side

The After Times

The Crows and Me

Essays on America: The Game

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing

The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop an even more super-intelligent computer system. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way. The head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside, can continue to grow. It seems unlikely, for this and a variety of other reasons, that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
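
To make the exponential intuition concrete, here is a minimal sketch in Python. Every number in it (the improvement factor, the increment, the generation count) is a purely illustrative assumption, not a prediction; the point is only the shape of the two curves.

```python
# Toy model contrasting recursive machine self-improvement with the
# slow, additive improvement available to biological brains.
# Every number below is an illustrative assumption, not a prediction.

def machine_lineage(start=1.0, improvement_factor=1.5, generations=20):
    """Each machine generation designs a successor that is a fixed
    multiple 'smarter' -- yielding exponential growth."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * improvement_factor)
    return levels

def biological_lineage(start=1.0, increment=0.01, generations=20):
    """Biological intelligence, constrained (for instance, by head size
    at birth), improves at best by small additive steps."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] + increment)
    return levels

if __name__ == "__main__":
    for gen, (m, b) in enumerate(zip(machine_lineage(), biological_lineage())):
        print(f"generation {gen:2d}: machine {m:10.2f}   biological {b:5.2f}")
```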

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code-breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality, and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being, or it could be a computer. If the person cannot determine whether they are communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
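
The protocol itself is simple enough to sketch in a few lines of Python. This is only a toy illustration of its shape, using the single-respondent version described above; the respondent functions are hypothetical stand-ins, and a real test would substitute an actual conversational program for the canned reply.

```python
import random

# Toy sketch of the (simplified, single-respondent) Turing Test protocol:
# an interrogator exchanges typed messages with a hidden respondent and
# must then guess whether it was a human or a machine.

def human_respondent(message: str) -> str:
    # A human at another teletype types the reply.
    return input(f"[hidden human] reply to {message!r}: ")

def machine_respondent(message: str) -> str:
    # Hypothetical stand-in for an actual conversational program.
    return "That is an interesting question. Could you say more?"

def imitation_game(questions):
    label, respond = random.choice(
        [("human", human_respondent), ("machine", machine_respondent)]
    )
    for q in questions:
        print(f"Interrogator: {q}")
        print(f"Respondent:   {respond(q)}")
    guess = input("Interrogator's verdict (human/machine): ").strip().lower()
    print("Correct." if guess == label else f"Wrong -- it was the {label}.")

if __name__ == "__main__":
    imitation_game(["What is your favorite poem?", "What is 7 times 8?"])
```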

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being were able to tell easily that they were communicating with a computer because the computer knew more, and answered more accurately and more quickly, than any person possibly could. (Think Watson and Jeopardy.) Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent?

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to things like earthquakes, weather, natural disasters, and plagues. These are claimed to be signs that God (or the gods) is angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.

Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate, via teletype, whether I were communicating with a potato or a four iron. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue of what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and their connectivity, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.
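
To see how little distance there is between “a very useful model” and the thing itself, consider that a workable model of a plague’s spread fits in a dozen lines. Below is a minimal sketch of the classic SIR (susceptible-infected-recovered) model, with purely illustrative parameters; however well it tracked a real epidemic, no one would call the program contagious.

```python
# Bare-bones SIR epidemic model. Useful for studying a plague's spread,
# yet plainly not itself a plague. All parameters are illustrative.

def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance the susceptible/infected/recovered fractions by one day."""
    new_infections = beta * s * i    # contacts between S and I
    new_recoveries = gamma * i       # infected who recover
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0            # fractions of the population
for day in range(100):
    s, i, r = sir_step(s, i, r)
print(f"After 100 days: susceptible={s:.3f} infected={i:.3f} recovered={r:.3f}")
```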

When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process our brains engage in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” what will or will not happen.

Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing, even though we believe we are using our intelligence to decide. I explore this concept in this post.

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 
