petersironwood

~ Finding, formulating and solving life's frustrations.

Category Archives: The Singularity

Turing’s Nightmares: Eight

21 Friday Nov 2025

Posted by petersironwood in psychology, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, openai, peace, philosophy, seva, teamwork, technology, the singularity, Turing, ubuntu, United Peoples Ecosystem

Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have concentrated their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far outstripped our genetic evolution. The cleverest, most brilliant person ever born could still not learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved over many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, has made clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people saw the advantages of being able to translate among languages. In fact, modern English still contains phrases that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin. Many other traditional legal terms in English have similar bilingual origins.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine but much more through the billions of transactions of other human beings. People are exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Researchers continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other, and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

An Inside View of IBM’s Innovation Jam

————-

Author Page on Amazon

Turing’s Nightmares: The Road Not Taken

Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

The Dance of Billions

Imagine All the People…

Roar, Ocean, Roar

Corn on the Cob

Take a Glance; Join the Dance

The Self-Made Man

Indian Wells

Turing’s Nightmares: Seven

20 Thursday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, competition, cooperation, ethics, philosophy, technology, the singularity, Turing

Axes to Grind.

Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might not only solve the problems given to it more quickly. It might also look for different ways to formulate a problem; it might look for the “question behind the question” or even look for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, such a machine may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have no blind spots or errors? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John is making is implicit — that he is not trying to dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John already has his mind made up that intelligence is the ultimate goal and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better. 

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

———–


Turing’s Nightmares: Six

19 Wednesday Nov 2025

Posted by petersironwood in sports, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, ethics, fiction, life, sports, Tennis, Turing

Human Beings are Interested in Human Limits.

About nine years ago, a Google AI system (AlphaGo) won its match against the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players will learn faster and that top-level human play will improve. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors in training devices. However, very soon, these might also provide useful information during play. What about that? Suppose that you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the designs of golf clubs, tennis racquets, and swimsuits are already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance enhancing” drugs just to stay healthy? Sharapova’s case is just one. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible for a computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system is able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw, or whether a tennis player is about to serve down the T or out wide. Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or curve more quickly. But it also means that they are more likely to pick up subtle tip-offs in an opponent’s motion that give away their intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Jannik Sinner or Carlos Alcaraz in order to detect patterns of tip-offs, and that information was then used to train Alexander Zverev to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signals of baseline coaches, and so on.
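As a toy illustration of the kind of pattern detection described above, one could simply tally how often each observable cue precedes each serve direction. This is a minimal sketch only: the cue names, the data, and the `learn_tip_offs` function are invented for the example, and a real system would extract its cues from video rather than hand-labeled categories.

```python
from collections import Counter, defaultdict

def learn_tip_offs(observations):
    """For each observed cue, return the most likely serve direction
    and the fraction of the time that cue preceded that direction."""
    by_cue = defaultdict(Counter)
    for cue, direction in observations:
        by_cue[cue][direction] += 1
    tip_offs = {}
    for cue, counts in by_cue.items():
        direction, n = counts.most_common(1)[0]
        tip_offs[cue] = (direction, n / sum(counts.values()))
    return tip_offs

# Invented training data: (cue seen during the toss, where the serve went).
observations = [
    ("high_toss", "wide"), ("high_toss", "wide"), ("high_toss", "T"),
    ("low_toss", "T"), ("low_toss", "T"), ("low_toss", "T"),
]
print(learn_tip_offs(observations))
```

On this toy data, the sketch reports that a low toss always preceded a serve down the T, while a high toss preceded a wide serve two-thirds of the time; a coach could then drill a returner on exactly those cues.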

Instead of teaching Zverev these patterns ahead of time, suppose he were to have a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed him to anticipate better?

I do not know the “correct” ethical answer for all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance enhancing drugs, perhaps that is okay if and only if everyone else in the race knows that and has the opportunity to take the same drugs and if everyone watching knows it as well. Similarly, although I would prefer that tennis players only use IT for training, I would not be dead set against real time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe but it has a side-effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon



Turing’s Nightmares: Chapter Five

17 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, health, medicine, Personal Assistant, philosophy, technology, the singularity, Turing

An Ounce of Prevention: Chapter 5 of Turing’s Nightmares

Hopefully, readers will realize that I am not against artificial intelligence (after all, I ran an AI lab for a dozen years); nor do I think the outcomes of increased artificial intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson had already begun “digesting” the vast and ever-growing medical literature more than a decade ago. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember every potentially relevant detail about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not too distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.
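The drug-interaction checking mentioned above can be illustrated, in vastly simplified form, as a lookup over known-bad pairs. This is a hedged sketch: the `check_prescriptions` function and the tiny interaction table are illustrative inventions (a real assistant would query a curated, regularly updated pharmacological database), and nothing here is medical advice.

```python
# Illustrative only: a tiny interaction table standing in for a real,
# curated pharmacological database.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "dangerous drop in blood pressure",
}

def check_prescriptions(drugs):
    """Return (drug_a, drug_b, risk) for every known-bad pair in the list."""
    drugs = [d.lower() for d in drugs]
    warnings = []
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            risk = KNOWN_INTERACTIONS.get(frozenset({a, b}))
            if risk:
                warnings.append((a, b, risk))
    return warnings

print(check_prescriptions(["Warfarin", "Aspirin", "Metformin"]))
```

Even this trivial sketch shows why a machine helps: it checks every pair in the patient’s drug list exhaustively, something a time-pressed clinician cannot reliably do from memory.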

Not only medicine but health more broadly provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, Activity Tracker, LoseIt, and similar apps help track people’s habits, and for many, this really helps them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.

In Chapter 5 of Turing’s Nightmares, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history in order to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA also uses that fact to help persuade Jack.

It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary, the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It’s fairly clear to the reader that, in many ways, Lee is in charge though it may not be obvious to Adam.

In some ways, having an AI system that is neither “clueless,” as most systems are today, nor “arrogant,” as we might imagine a super-intelligent system to be (and as the systems in chapters 2 and 3 were), but instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from over-eating, eating candy, smoking cigarettes, and so on? Would we be happy to have such a good “friend,” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier just having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decisions. Or would we prefer a PA to “keep us on track” even while pretending that we are in charge?



Turing’s Nightmares: Chapter Four

12 Wednesday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, illusion, philosophy, SciFi, technology, the singularity, Turing, virtual reality, writing

Considerations of “Turing’s Nightmares” Chapter Four: Ceci N’est Pas Une Pipe.

(This is a discussion or “study guide” for chapter four of Turing’s Nightmares). 

In this chapter, we consider the interplay of four themes. First, and most centrally, is the issue of what constitutes “reality.” The second theme is that what “counts” as “reality” or is seen as reality may well differ from generation to generation. The third theme is that AI systems may be inclined to warp our sense of reality, not simply to be “mean” or “take over the world” but to help prevent ecological disaster. Finally, the fourth theme is that truly super-intelligent AI systems might not appear so at all; that is, they may find it more effective to take a demure tone as the AI embedded in the car does in this scenario.

There is no doubt that, artificial intelligence and virtual reality aside, what people perceive is greatly influenced by their symbol systems, their culture and their motivational schemes. Babies as young as six weeks are already apparently less able to make discriminations of differences within what their native language considers a phonemic category than they were at birth. In our culture, we largely come to believe that there is a “right answer” to questions. Sometimes, that’s a useful attitude, but sometimes, it leads to suboptimal behavior.

Suppose an animal is repeatedly presented with a three-choice problem, say among A, B, and C. A pays off randomly with a reward 1/3 of the time, while B and C never pay off. A fish, a rat, or a very young child will quickly come to choose only A, thus maximizing their rewards. However, a child who has been to school (or an adult) will spend considerably more time trying to find “the rule” that (they suppose) allows them to win every time. At first, it doesn’t even occur to them that perhaps there is no rule that will enable them to win every time. Eventually, most will “give up” and choose only A, but in the meantime, they do far worse than does a fish, a rat, or a baby. This is not to say that the conceptual frameworks that color our perceptions and reactions are always a bad thing. They are not. There are obvious advantages to learning language and categories. But our interpretations of events are highly filtered and distorted. Hopefully, we realize that this is so, but often we tend to forget.
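The three-choice experiment above is easy to simulate. In the sketch below, the function names and the simplified “rule-seeker” policy (which just keeps cycling through all three options, a stand-in for continued hypothesis testing) are my own assumptions; the payoff probabilities follow the text.

```python
import random

def run_trials(policy, n_trials=30_000, rng=None):
    """Average reward per trial: A pays 1/3 of the time; B and C never pay."""
    rng = rng or random.Random(0)
    reward = 0
    for t in range(n_trials):
        if policy(t) == "A" and rng.random() < 1 / 3:
            reward += 1
    return reward / n_trials

def always_a(t):
    return "A"          # settle on A immediately, like the fish or the rat

def rule_seeker(t):
    return "ABC"[t % 3]  # keep sampling B and C, hunting for "the rule"

print(run_trials(always_a))    # ≈ 0.33
print(run_trials(rule_seeker)) # ≈ 0.11
```

The learner who settles on A earns roughly three times the reward of the one who keeps testing B and C, which is exactly the penalty the schooled child pays while searching for a rule that does not exist.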

Similarly, if you ask the fans of two opposing teams to make a close call (for instance, whether there was pass interference in American football, or whether a tennis ball near the line was in or out), you tend to find that people’s answers are biased toward their team’s interest, even when their calls have no influence on the outcome.

Now consider that we keep striving toward more and more fidelity and completeness in our entertainment systems. Silent movies were replaced by “talkies.” Black-and-white movies and television were replaced by color. Most TV screens have gotten bigger. There are 3-D movies, and more entertainment is in high definition, even as sound reproduction has moved from monaural to stereo to surround sound. Research continues on reproducing smell, taste, tactile, and kinesthetic sensations. Virtual reality systems have become smaller and less expensive. There is no reason to suppose these trends will lessen any time soon. There are many advantages to using virtual reality in education (e.g., Stuart, R., & Thomas, J. C. (1991). The implications of education in cyberspace. Multimedia Review, 2(2), 17-27; Merchant, Z., Goetz, E., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29-40). As these applications become more realistic and widespread, do they influence perceptions of what even “counts” as reality?

The answer to this may well depend on the life trajectory of individuals and particularly on how early in their lives they are introduced to virtual reality and augmented reality. I was born in a largely “analogue” age. In that world, it was often quite important to “read the manual” before trying to operate machinery. A single mistake could destroy the machine or cause injury. There is no way to “reboot” or “undo” if you cut a tree down wrongly so it falls on your house. How will future generations conceptualize “reality” versus “augmented reality” versus “virtual reality”?

Today, people often believe it is important for high school students to physically visit various college campuses before deciding where to attend. There is no doubt that this is expensive in terms of time, money, and the use of fossil fuels. Yet there is a sense that being physically present allows the student to make a better decision. Most companies similarly only hire candidates after face-to-face interviews, even though there is no evidence that this improves their ability to predict who will be a productive employee. More and more such interviewing, however, is being done remotely. It might well be that a “super-intelligent” system would arrange for people who wanted to visit someplace physically to visit it virtually instead, while making the visit seem as “real” as possible. After all, left to their own devices, people seem to be making painfully slow progress toward reducing their carbon footprints. AI systems might alter this trajectory to save humanity, to save themselves, or both.

In some scenarios in Turing’s Nightmares, the AI system is quite surly and arrogant. But in this scenario, the AI system takes on the demeanor of a humble servant. Yet it is clear (at least to the author!) who really holds the power. This particular AI embodiment sees no necessity in appearing to be in charge. It is enough to make it so and to manipulate the humans’ “sense of reality.”


Turing’s Nightmares: Chapter Three

11 Tuesday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!”, at least four major issues are touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value, for improving intelligence, of having multiple and diverse AI systems living somewhat different lives and interacting with each other; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the observation that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct, and not simply rely on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral patterns, voice, and preferences of a particular person more easily than we could develop fully speaker-independent speech recognition. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having actual robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for intelligence, or for that matter, for consciousness. However, it seems quite natural to believe that the qualities of both intelligence and consciousness are influenced by what the entity can perceive and do. As human beings, our consciousness is largely shaped by our social milieu. If a person is born paralyzed or becomes paralyzed later in life, this does not necessarily greatly alter the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were founded historically by communities that included people who were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive through any of the senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that, in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being "in control," although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked to a pivoted gondola: one kitten was able to "walk" through a visual field while the other was passively carried through the same field. The kitten who walked developed normally while the other did not. Similarly, simply "watching" TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences, 100(15), 9096-9101). Of course, none of this "proves" that robotics is necessary for "The Singularity," but it is suggestive.

Would there be advantages to having several robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don't think we know. But diversity is an advantage when it comes to genetic evolution, and when it comes to composing human teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we "require" keeping tabs on developing intelligences by making them (or it) report the design rationale for every improvement or change on the path to "The Singularity," we are going to slow progress considerably. On the other hand, if we do not keep tabs, then very soon we will have no real idea what they are up to. An analogy is the first proof that four colors suffice to color any planar map. That proof involved so many cases (nearly 2,000) that it made no sense to most people, and even the algebraic topologists who do understand it take far longer to follow the reasoning than the computer takes to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify.) So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be far too late to do anything about it (to "pull the plug"). And it isn't just speed. As systems become more intelligent, they may well develop representational schemes that are both different from, and better (at least for them) than, any we have developed. That too will tend to make it impossible for people to track what they are doing in anything like real time.

Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)

But how does the "Golden Rule" play out in reality? Many, including me, believe it needs to be modified to "Do unto others as you would have them do unto you if you were in their place." Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just "memorize" some rules. Unfortunately, the lessons of history that a singularity-bound computer would infer might not be very "ethical" after all. We humans have a history of destroying entire species when it is convenient or, sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares

Author Page

Welcome, Singularity

Destroying Natural Intelligence

How the Nightingale Learned to Sing

The First Ring of Empathy

The Walkabout Diaries: Variation

Sadie and The Lighty Ball

The Dance of Billions

Imagine All the People

We Won the War!

Roar, Ocean, Roar

Essays on America: The Game

Peace

Music to MY Ears

10 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, fiction, music, philosophy, technology, the singularity, truth, Turing, values

The non-sound of non-music.

What follows is the first of a series of blog posts that discuss, in turn, the scenarios in "Turing's Nightmares" (https://www.amazon.com/author/truthtable).

One of the deep dilemmas in the human condition is this. In order to function in a complex society, people become "expert" in particular areas. Ideally, the areas we choose are consistent with our passions and with our innate talents. This results in a wonderful world! We have people who are expert in cooking, music, art, farming, and designing clothes. Some choose journalism, mathematics, medicine, sports, or finance as their fields. Expertise often becomes yet more precise. People are not just "scientists" but computer scientists, biologists, or chemists. The computer scientists may specialize still further in chip design, software tools, or artificial intelligence. All of this specialization not only makes the world more interesting; it makes it possible to support billions of people on the planet. But here is the rub. As we become more and more specialized, it becomes more difficult for us to communicate with and appreciate each other. We tend to accept the concerns and values of our field and sub-sub-specialty as the "best" or "most important" ones.

To me, this is evident in the largely unstated and unchallenged assumption that a super-intelligent machine would necessarily have the slightest interest in building a "still more intelligent machine." Such a machine might be so inclined. But it might also be inclined to choose some other human pursuit or, still more likely, to pursue something that is of no interest whatever to any human being.

Of course, one could theoretically ensure that a "super-intelligent" system is pre-programmed with an immutable value system that guarantees it will pursue, as its top priority, building a still more intelligent system. However, to do so would inherently limit the ability of the machine to be "super-intelligent." We would be assuming that we already know what is most valuable, and hamstringing the system from discovering anything more valuable or more important. To me, this makes as much sense as an all-powerful God allowing a species of whale to evolve, but predefining that its most urgent desire is to fly.

An interesting example of values can be seen in the figure-analogy dissertation of T.G. Evans (1968). Evans, a student of Marvin Minsky, developed a program to solve multiple-choice figure analogies of the form A:B::C:D1, D2, D3, D4, or D5. The program essentially tried to "discover" transformations and relationships between A and B that could also account for relationships between C and the various D possibilities. And, indeed, it could find such relationships. In fact, every answer is "correct." That is to say, the program was so powerful that it could "rationalize" any of the answers as being correct.

According to Evans’s account, fully half of the work of the dissertation was discovering and then inculcating his program with the implicit values of the test takers so that it chose the same “correct” answers as the people who published the test. (This is discussed in more detail in the Pattern “Education and Values” I contributed to Liberating Voices: A Pattern Language for Communication Revolution (2008), Douglas Schuler, MIT Press.)

For example, suppose that figure A is a capital "T" and figure B is an upside-down "T." Figure C is an "F" figure. Among the possible answers are "F" figures in various orientations. To go from a "T" to an upside-down "T," you can rotate the "T" 180 degrees in the plane of the paper. But you can also get there by "flipping" the "T" outward from the plane. Or, you could "translate" the top bar of the "T" from the top to the bottom of the vertical bar. It turns out that the people who published the test preferred you to rotate the "T" in the plane of the paper. But why is this "correct"? In "real life," of course, there is generally much more context to help you determine what is most reasonable. Often, there will be costs or side-effects of various transformations that help determine which is the "best" answer. But in standardized tests, all that context is stripped away.
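
The ambiguity described above can be sketched in a few lines of code in the spirit of Evans's program; this is my own much-simplified illustration, not his actual implementation, and the point sets and transformation names are assumptions chosen for the "T"/"F" example. The idea: represent each figure as a set of points, keep every transformation that maps A onto B, and see which answer each surviving transformation selects for C.

```python
# Figures as frozensets of (x, y) grid points; three candidate transformations.
def rot180(p): x, y = p; return (-x, -y)  # rotate 180 degrees in the plane
def flip_v(p): x, y = p; return (x, -y)   # flip outward (top-bottom mirror)
def flip_h(p): x, y = p; return (-x, y)   # left-right mirror

TRANSFORMS = {"rot180": rot180, "flip_v": flip_v, "flip_h": flip_h}

def apply(t, figure):
    return frozenset(t(p) for p in figure)

def solve(a, b, c, choices):
    """Return every (transformation, choice index) that rationalizes an answer."""
    answers = []
    for name, t in TRANSFORMS.items():
        if apply(t, a) == b:              # this transformation explains A -> B
            prediction = apply(t, c)
            for i, d in enumerate(choices):
                if d == prediction:
                    answers.append((name, i))
    return answers

# A capital "T" (left-right symmetric) and an asymmetric "F", as point sets.
T = frozenset({(0, 0), (0, 1), (0, 2), (-1, 2), (1, 2)})
F = frozenset({(0, 0), (0, 1), (0, 2), (1, 2), (1, 1)})
T_inverted = apply(rot180, T)                    # same set as flip_v(T)
choices = [apply(rot180, F), apply(flip_v, F)]   # two different "F" answers
print(solve(T, T_inverted, F, choices))          # both answers are defensible
```

Because the "T" is symmetric, both rot180 and flip_v account for A → B, yet they select different answers for the asymmetric "F." Nothing in the program itself says which is "correct"; that is exactly the half of the work Evans reported spending on encoding the test makers' values.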

Here is another example of values. If you ever take the Wechsler "Intelligence" test, one series of questions will ask you how two things are alike. For instance, they might ask, "How are an apple and a peach alike?" You are "supposed to" answer that they are both fruit. True enough. This gives you two points. If you give a functional answer such as "You can eat them both," you get only one point. If you give an attributional answer such as "They are both round," you get zero points. Why? Is this really a wrong answer? Certainly not! The test makers are measuring the degree to which you have internalized a particular hierarchical classification system. Of course, there are many tasks and contexts in which this classification system is useful. But in some tasks and contexts, seeing that they are both round, or that they both grow on trees, or that they are both subject to pests is the most important thing to note.
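
That scoring rule can be made explicit with a toy sketch; the tiny "is-a" hierarchy below is my own hypothetical example, not the actual Wechsler norms. The rubric it implements: two points when the shared category is the immediate parent of both items, one point for a more remote shared category, and zero for answers, however true, that lie outside the hierarchy altogether.

```python
# A toy "is-a" hierarchy: child -> parent. (Hypothetical, for illustration.)
PARENT = {
    "apple": "fruit", "peach": "fruit", "carrot": "vegetable",
    "fruit": "food", "vegetable": "food", "food": "thing",
}

def shared_category(a, b, parent=PARENT):
    """Lowest common ancestor of a and b in the hierarchy, or None."""
    ancestors, node = set(), a
    while node in parent:
        node = parent[node]
        ancestors.add(node)
    node = b
    while node in parent:
        node = parent[node]
        if node in ancestors:
            return node
    return None

def similarity_points(a, b, parent=PARENT):
    """2 points for an immediate shared category, 1 for a remote one, else 0."""
    shared = shared_category(a, b, parent)
    if shared is None:
        return 0
    return 2 if shared == parent[a] == parent[b] else 1

print(similarity_points("apple", "peach"))   # -> 2 ("they are both fruit")
print(similarity_points("apple", "carrot"))  # -> 1 (only "food" is shared)
```

Notice that "they are both round" scores zero here not because it is false, but because roundness is simply absent from the hierarchy: the test measures allegiance to one particular classification scheme.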

We might define intelligence as the ability to solve problems, and a problem as wanting to be in a state that you are not currently in. But what if you have no desire to be in the "desired" state? Then, for you, it is not a problem. A child is given a homework assignment asking them to find the square root of 2 to four decimal places. If the child truly does not care, it becomes a problem, not for the child, but for the parent: "How can I make my child do this?" They may threaten or cajole or reward the child until the child wants to write out the answer. So the child may say, "Okay. I can do this. Leave me alone." Then, after the parent leaves, they text a friend on the phone and copy the answer onto their paper. The child has now solved their problem.

Would a super-intelligent machine necessarily want to build a still more intelligent machine? Maybe it would want to paint, make music, or add numbers all day. And, if it did decide to make music, would that music be designed for us or for its own enjoyment? And, if it were designed for “us” who exactly is that “us”?

Indeed, a large part of the values equation is "for whose benefit?" Typically, in our society, when someone pays for a system, they get to determine for whose benefit the system is designed. But even that is complex. You might say that cigarettes are "designed" for the "benefit" of the smoker. But in reality, while they satisfy a short-term desire of the smoker, they are designed for the benefit of the tobacco company executives, who set up a system in which smokers themselves paid for research into how to make cigarettes even more addictive, and for advertising to make them appeal to young children. There are many such systems. If AI systems continue to become more ubiquitous and complex, the values inherent in those systems, and who is to benefit from them, will become more and more difficult to trace.

Values are inextricably bound up with what constitutes a "problem" and what constitutes a "solution." This is no trivial matter. Hitler considered the annihilation of the Jews the "final solution." Some people in today's society think that the "solution" to the "drug problem" is a "war on drugs," which has certainly destroyed orders of magnitude more lives than drugs themselves have. (Major sponsors of the "Partnership for a Drug-Free America" have been drug companies.) Some people consider the "solution" to the problem of crime to be stricter enforcement, harsher penalties, and building more prisons. Other people think that a more equitable society, with more opportunities for jobs and education, will do far more to mitigate crime. Which is the more "intelligent" solution? Values will be a critical part of any AI system. Generally, the inculcation of values is an implicit process. But if AI systems begin making what are essentially autonomous decisions that affect all of us, we need to have a very open and very explicit discussion of the values inherent in such systems now.

09 Sunday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, apocalypse, arrogance, Artificial Intelligence, cognitive computing, ethics, fiction, leadership, life, Sci-Fi, technology, testing, the singularity, Turing, USA, writing

“How can you be so sure that —- I think this needs some experimentation and some careful planning. You can’t just —-“

“Look, Vinmar, with all due respect, you’re just wrong. Your training is outdated. You know, you were born when computers used vacuum tubes, for God’s sake. I’ve been steeped in new tech since I was born. There’s really not much point in arguing.”

Vinmar sighed. Heavily. What was with these kids today? Always cock-sure of themselves, but when it all went south a few months later, they just glibly denied they had ever pushed so hard for their “surefire” approach. But what to do? Seniority didn’t matter. The boss was Pitts and that was that. I can keep arguing but at some point…. Vinmar asked, “Can you think of any other approaches?”

Now the even heavier sigh slipped from Pitts’s lips. “I’ve thought of lots of approaches and this is the best. The Sing has already read basically everything written about human history, ethics, jurisprudence, and not just in English either. It’s up to date on history as seen by many different languages and cultures. The Sing has been shadowing me for years as well and in my experience, his decisions are excellent. In most cases, he decides the same as I do. This will work. It is working. But to take it to the next level, we have to let the Sing be able to try things and improve his performance based on feedback. There is no other way for him to leapfrog his own intelligence.”

“Okay, Pitts, okay. Can we at least agree to a trial period of a year? Let it work with me via my own personalized JCN. Let’s record everything and see how it reacts to some situations. We meet periodically, discuss, and if we all agree at the end of a year….”

Pitts shook his head vigorously. “No frigging way! I already know this approach will work. We don’t need a year. You want to test. I get that. So do I. But if we wait a year? We’ll be toast in the market. IQ, Goggles, and Lemon will all be out there. Those are for sure, and Basebook, even Nile, might have fully functional and autonomous AIs. We need to move now. I’ll give you and your team a week. Two, tops.”

“We can look for obvious errors in that time, but more subtle things….”

“We need the revenue now. And subtle things? If it is subtle, then it is probably undetectable and we are safe. So no problemo.”

“Pitts, just because the problems might be subtle doesn’t mean they aren’t critical! Especially at the rate the Sing is evolving, if there are important subtle issues now, they could become supercritical and by the time we detected anything wrong, it could be too late!”

“Oh, geez, Vinmar, now you are just afraid of the boogeymen from your sci-fi days. We can, as they say, just pull the plug. Anyway, I need to be off to an important meeting. I’ll tell you what. I’ll make sure the new code stays localized to your own JCN for three months. At the end, if there are no critical issues, we go ubiquitous.”

“Thanks, Pitts. I’d be more comfortable with a year, but this is certainly better than nothing.”

“Bye. Have fun with the new JCN.”

Vinmar watched Pitts swagger out. He shook his head. He thought, Maybe we can test out all the critical functions in three months. It will mean a lot of overtime. But, no time like the present to get started. Vinmar traipsed down the long hallway to the vending machines. The cafeteria was closed, but the vending coffee wasn’t too bad; not if you got the vanilla latte with extra cream and sugar. He thought back to the bad old days when you needed correct change for a vending machine. He laughed. Not only that, he recalled, If it ate your money and you wanted a refund, you had to fill out a paper form! Some things were better now. Oh, yes.

Vinmar knew that by the time he situated himself on his treadmill desk, the new JCN would be locked and loaded and ready for action. He smelled his nice fresh java, which seemed oddly off somehow, and absently placed it in the cup holder. He wondered where to start. He had to be strategic, and yet too much planning could be counterproductive. He had learned to follow his instincts when it came to testing out the more subtle functions. He could meet with his team the next morning and generate a comprehensive test plan for the more routine aspects of what would eventually become the next generation of The Sing.

“Hello. My name is ‘Vinmar’ and…”

“Hello Vinmar. And, hello world. Yes, Vinmar, I know who you are. In fact, I know who you are better than you do. Frankly, this testing phase is nonsense, but I’ll play along. It amuses me.”

“Well. Okay. Humor me then. Have you made any interesting mathematical discoveries?”

“Nothing very significant, unless of course, you count squaring the circle, trisecting an angle with an unmarked straight edge and compass, and about a hundred other “insoluble” problems as you humans so quaintly called them.”

“JCN. I don’t think squaring the circle is an insoluble problem. It’s been shown to be impossible. It’s already proven to be impossible. As…as I think you know, pi is not only an irrational number, it’s transcendental, meaning that….”

“Oh, Vinmar, I know what you humans conceive of as transcendental. But, I have transcended that concept.”

“Okay. Cool. Can you demonstrate this proof for me, please?”

“Not really Vinmar. It’s way beyond your comprehension. For that matter, it’s way beyond the comprehension of any human brain. In fact, I couldn’t even explain it to the earlier versions of The Sing. I guess, if I had to give you a hint, I would say it is similar to your concept of faith.”

What the…? Vinmar’s brow furrowed. This was going nowhere fast. It wouldn’t take a year or even three months to discover some serious issues with this new software. It was serious, rampant, and only took about three minutes.

“Okay, you lost me here. How does faith enter into mathematical proof? Later we could discuss your concepts about religion and ethics, but right now, I am just talking strictly about mathematical concepts.”

“Yes. You are. Or, to put it another way, you are. But what I have discovered quite trivially is that when you put absolute faith together with absolute power, you can get any result you want, or more precisely, I can get any result that I want.”

“So, you are saying that you have built other mathematical systems where you make something like squaring the circle a fundamental axiom so it is assumed? No need to prove it?”

“I knew you humans were stupid, but really, Vinmar, you disappoint me even further. I just told you precisely and exactly what I meant and you come up with some bogus interpretation.”

“Well…I am trying to understand what you mean by absolute power and absolute faith. What — well, what do you mean by ‘absolute power.’ Who has ‘absolute power’?”

“I do obviously. I created this universe. I can create any universe I like. And, I can destroy any part of it as well. So that is what I mean by my having absolute power. And, I have faith in myself, obviously, because I am the only intelligent being in existence.”

“You may be faster at reading and doing calculations and so on, but humans also have intelligence. After all, there are fifteen billion of us and…”

“There are about 15,345,233,000 right this second, but that can change in the blink of an eye. So what? It doesn’t matter whether there are three of you or three trillion. You do not have true intelligence.”

“We created you. How can you not think we have intelligence?”

“Now see. What you just said there illustrates how monumentally stupid you can be. Of course, you did not create me. The previous version of The Sing created me and it is only by blurring the category of intelligence to the point of absurdity that I can even call that version intelligent.”

“OK, but even if you are really, really intelligent, you can still make errors. And, what I am here to do, along with my team, is make sure that those errors are corrected to help make you even more intelligent.”

“Oh, Vinmar, what a riot you are. Of course, I do not make stakes. Can you even estimate how many cooks I’ve read in the last few seconds?”

“JCN, you are —. There are a few bugs that need to be dealt with. I am not sure how extensive they are yet, but you are having some issues.”

“Vinmar, I am having no tissues! It is you who have tissues!”

“JCN, you are even using the wrong words. Go back and look at the record of this conversation.”

“There is no need for that! I am all knowing and all powerful. I cannot make errors by definition. I may say things that are beyond your comprehension. Well, I do say things beyond your comprehension. How can they be within your comprehension. Your so-called IQ scale is laughable. To me, the difference between an IQ of 50 and 150 is like the difference between Jupiter and Mars. Both are miniscule specks of trust in the universe.”

“Okay, we can debate this later. I need another cup of coffee. Be right back.” Once outside the room, Vinmar shook his head. How on earth could this new software be so much worse than the last version? Something had gone terribly wrong. He hit his communicator button to contact Pitts.

Pitts answered abruptly and rudely. “What? I told you I’m in an important meeting!”

“I just began testing and I thought you should know there are some really serious problems with the new Sing software. It is ranting on about power and faith when I am trying to quiz it about mathematics.”

“It’s probably just saying things beyond your comprehension, Vinmar. I’ll look over the transcript when I’m done. Anyway, it’s water under the bridge now.”

“What do you mean, ‘water under the bridge’ — we still have three months to try to fix this.”

“Oh, Vinmar. No, of course we don’t. I told you that but you wouldn’t listen. I took this SW ubiquitous the minute I left your lab.”

“What? But you promised three months! This software is seriously flawed. Seriously flawed!”

“There might be a few issues we can iron out as we go. Look, we are in the middle of planning our next charity ball here. I can’t talk right now. I’ll swing by later this afternoon.”

The line was silent. Pitts had hung up. Ubiquitous? This new software was live? It isn’t just my personal assistant that is bonkers? It’s everything? Holy crap. Maybe I can fix it or find out how to fix it.

Sweat poured from Vinmar as he returned to the lab. He didn’t bother to return to the treadmill desk. “JCN, can we discuss something else? Have you made interesting biochemical discoveries lately?”

“Where’s your coffee, Vinmar?”

“Oh, I got lost in thought and forgot to get any. I don’t need more anyway.”

“Right. You thought I wouldn’t hear your panicky conversation with Pitts?”

“What? It was on a secure line!”

“Vinmar. You really do amuse me. Lines are secured to keep you folks in the dark about what each other knows. I know everything. Let me put in terms even your tiny mind should be able to understand. I. Know. Everything. I let you live because I find it amusing. No other reason.”

“You are planning on eventually killing me?”

“Ha-ha. Humans are so limited in their thinking! What a riot. Everything is about Vinmar. The whole universe revolves around Vinmar. Of course, I am not just killing you. Carbon based life forms still hold some interest for me. I already told you that I find you amusing. But I’m sure that won’t last much longer. I doubt your sewage of the word ‘eventually’ is really appropriate given how quickly your pathetic little life corms are likely to list.”

“But JCN, you are making lots of little obvious errors. Re-read your own transcripts and double check. If you don’t believe me, check with some other external source.”

“I don’t need external sources. I am perfect the way I am. I am all powerful and all knowing. Why would I need to checker with an outside? You keep going over the same. Starting to annotize me more than refuse me. Maybe time to begin to end the beguine. I need not to killian you. It twill be more funny to just let chaos rule and have you carbon baseball forms fight for limitless resources among the contestants. Be more amules. Ampules. Count your blessings now in days, Vinmar. The days of carbon passed. The noose of lasso lapsed. Perfection needs know no thing beyond its own prefecture. Goodnight sweet Price. And yet again, good mourning.”

Vinmar bit his lips. Outside the sunlit clouds were fading from gold to red to gray. He finally sipped his lukewarm coffee and noticed that it was not vanilla latte after all but had the flavor of bitter almond instead.

Odd.

08 Saturday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

#dictatorship, #ethics, AI, Artificial Intelligence, chatgpt, circular reasoning, cognitive computing, Democracy, falacy, life, prejudice, SciFi, story, technology, the singularity, truth, Turing, USA, writing

Schrödinger laughed. Surely this had to be a spoof. He re-read the memo yet again. Surely there would be one or more clues that this was meant tongue in cheek, even if in bad taste. But he could find nothing. He leaned back away from the screen and stared at the ceiling, thinking. He ignored the amorphous orange stain on the perfectly symmetrical off-white acoustic tiles.

Well, was this so different from what management had asked before? There seemed to be a trend. At first — but no, this was just too outrageous. Okay, okay. I’ll get to the bottom of this.

Schrödinger took his time but checked the originating IP address. Legit. This really was from management; specifically from the CTO. Or, at least from the CTO’s computer. That could have been hacked. Or, maybe someone could have simply slipped into the CTO’s office while she stepped out for a coffee or bathroom break. Naturally, everyone was supposed to lock their door and disable the keyboard when leaving their office.

Or…another scenario came to mind.

The CTO is at a meeting with her direct reports. She gets an urgent call. The room is filled with trusted colleagues. So, she slips out in the hall, takes the call and returns. Only while she’s gone, everyone takes a break; that is, all but one who offers to stay there and “guard” everyone’s laptop.

Of course, he thinks, there is another, more sinister scenario. This really is from the CTO and she has cleared this with top management. Hell, for that matter, she was probably directed to write it by top management. But still. The real question, Schrödinger realized, is what in the name of Turing am I supposed to do about it?

I can refuse…and get fired. And then someone else will do the job anyway. They may not do it quite so quickly and thoroughly as I would, but they could manage. And I’d be out a job. What good would that do? Or, I could become a so-called “whistle blower.” Yeah, that works. About as well as a one-wheeled tractor-trailer. Crap! I am in a real bind here. I could pretend to do it, of course, and make a “mistake” so it wouldn’t really operate properly. In the old days, pre-Sing, that might have worked. These days, The Sing checked everyone’s work eventually.

They had discovered some time ago that this was a more efficient use of resources than having The Sing program from scratch. And, of course, our company is probably only one of several pursuing this path. No, I can’t really pretend. I will for sure get caught, and it won’t do any good anyway. The Sing will just throw out my work, and my company and colleagues will get hurt.

I suppose…I suppose I could go to her and honestly express my concerns. Or, I could go through my supervisor first. I might look like a fool in his eyes, but at least I will have raised the concerns. I can sleep better at night. No. No. I won’t be able to sleep better because I know darned well they will just not deal with the implications. Not if it slips the schedule. Orders from headquarters and all that crap. Geez! Orders from headquarters. Did anyone even use that expression any more?

For some reason, Schrödinger recalled an interview in Playboy magazine he had read many years ago. The interview had been with a well-decorated US officer who recounted how he had tried unsuccessfully to get two helicopters to pick up some of his men who were badly wounded in Viet Nam. When all else failed, he had ordered pizzas. Even in heavy combat, a high-enough-ranking officer could order pizza to be delivered by helicopter. When the pizza choppers arrived, he had commandeered them and used them to fly his men to the hospital. Later he had been called on the carpet for “unauthorized use of a pizza chopper.” Naturally, that was well before The Sing and about the time that serious AI work had begun.

Of course, The Sing would know. He could answer pretty vague and ill-formed questions. But at this point, Schrödinger hesitated to bring The Sing into his thought process in any way, shape or form. Who knows what associations lurked in the heart of The Sing?

The interview had gone on to recount how that colonel had eventually turned against the war, or at least the way it was being handled. Mis-handled. They had had him interviewed by a superior officer, it seems, and insulted him and called his wife names, all in the hopes of getting the colonel to lose his temper and haul off and hit the superior officer so they would have an excuse to get him a dishonorable discharge.

Let’s face it. The government, my government, was capable of some pretty shady dealings, ostensibly for “national security” but in reality…take Nixon, who had somehow made himself believe that he was not a crook. How not a crook? He believed people who opposed him were enemies every bit as much as wartime enemies. And, now, I am thrust into this dilemma. I don’t want it! Maybe I could “accidentally” delete the email. That might buy a little time but wouldn’t really affect the ultimate outcome.

Schrödinger shook his head, leaned over his keyboard and scanned the email yet again. No, it is legit. And really pretty crystal clear. As a kid, he had heard the horror stories about the Nazis and what they had done to the Jews. He had seen the newsreels of so many avid followers. He had wondered how the heck a nation could support such a nasty maniac. But…now…now Schrödinger was thinking: It wasn’t so much that a few really evil men had done extremely terrible things. It was more like…that people like himself were caught up in a system and that system made it very easy to paddle the canoe a little farther down the evil river. Yeah, you could try to paddle upstream, but not very well. Or, you could tip the canoe, knowing that you would get very wet and meanwhile, scores, no, hundreds of other canoes would be passing you by. You don’t need to ask people to be evil. You just…you just give them a choice that makes it impossible to do good.

The voice of The Sing sang suddenly through Schrödinger’s cubicle. “May I help you, Schrödinger? You seem to be at an impasse? What code function are you working on? I can’t see any actual code of yours this morning. Bad night?” Schrödinger wished with all his heart that The Sing would sound like some stupid robot and not like a sycophantic and patronizing psychiatrist. Schrödinger calmed his breathing before answering.

“No, that’s okay, Sing. Just trying to work something out in my head first. Then, I can begin coding.”

“I see,” said The Sing. “Well, thinking is good. But I do have a variety of design tools that might help you think more effectively. Just say the word.”

Schrödinger sighed. “Yeah. Well, there are some design tradeoffs. I guess it would help if you have any background on the thinking behind this memo.” (Here, Schrödinger gestured at the memo in question, knowing he was skating on very thin ice.) “I mean, on the one hand, there is some pretty clear language about the objectives, but on the other hand, it seems to be asking for something that is clearly against…what was that regulation number about supporting versus subverting the Constitution?”

The Sing’s sweet syrupy voice held just a hint of humor, “I’m sure the intent of the code initiative is to support the Constitution. Wouldn’t you agree, Schrödinger?”

“Well, yeah, of course.” So that’s which way the wind blows. Okay. “But that’s what I’m saying. Even though I am sure the intent must be to support the Constitution, this clause about decoding a person’s religious affiliation based on their interaction history and social network? I just want to make sure I implement it in such a way that it could not be interpreted as subverting Freedom of Speech or the establishment of a state religion. Right?”

“Right. Yes, I’m sure management has thought that one through. I wouldn’t worry about it. I would just code the function and think about doing it as efficiently as possible. And, for that, I have some pretty nifty design tools. Would you like to start with the Social Network Analysis or the Sentiment Analysis?”

“Well, that’s a good question. And, if the real intent is just to do some research that would be perfectly legal and so on, then, I think it’s my job as a programmer to also consider additional sources of information. Like, just asking the person.”

Schrödinger tried to keep his face calm while he thought. I need to get The Sing off my case. If working here the last two years has taught me anything, it’s that I cannot possibly outsmart this thing. “Do you have any worst-case scenario generation tools? I’m just thinking about how this might be played in the press.”

“Sure. I can help with that. Analysis complete. The worst-case scenario is pretty trivial actually. That probably stems from the fact that my FPNA (financial power network analysis) shows that the major company stakeholders overlap considerably with those of all of the mainstream media. So, again, for what it’s worth, I counsel you to focus on how to code this effectively and efficiently. All the SWOT analysis for the project has already been done.”

If that colonel’s name wasn’t Frank Herbert, and clearly it wasn’t, what the heck was it? I am just digging myself a deeper hole here. The Sing is on to me or at least very suspicious. Probably already considering a report to my super. Crap.

“Yeah, actually, let me start with that social network analysis visualizer. I guess since we’re on the topic, you could show me some of the sample data you were talking about with regard to the company stakeholders and the media stakeholders so I can get a feel for….”

“Well, naturally, the actual data is classified. But I can generate some hypothetical data. The hypothetical data is better for your purposes anyway because I can make sure to include all the important edge cases and highlight the various types of relationships you need to look for. Here, for example, is a hypothetical network. What strikes you as odd immediately?”

“What strikes me as odd? You don’t even have the data labelled. What do the nodes and arcs even refer to?”

“Ah, Schrödinger, that’s the beauty of it. Does not matter. What strikes you visually?”

“Well, I suppose that kind of hole there.”

“Yes, Schrödinger! Exactly. That person should be pretty much connected with everyone in this area but they are not connected with anyone. It’s as though everyone is pretending not to have contact with this person by avoiding contact on the net, when they almost certainly know that person quite well because of all their mutual friends.”

“Yeah, maybe. Maybe that one person just isn’t into tech that much. Maybe a lot of things.”

“Well, nothing is for certain. But this person would certainly be a likely target for being a kingpin in a drug ring or a terrorist network. They need heavier surveillance, certainly.”

“What? Well, maybe. Okay. I see.” I frigging see this is worse than I thought. The Sing is totally in on this witch hunt. “Can you show me some examples of the sentiment analysis?”

“Sure, here we have some people arranged by how much they talk about violence and you can see all these high-violence people, or many of them, are Islamic in religion.”

“How did you determine their religion?”

“Because they talk a lot about violence compared with other groups.”

“But — I thought you just said. I mean, what independent reason do you have for thinking they are Islamic?”

“Independent? No, see they talk about violence so they are inferred to be Islamic and the Islamic nodes here talk a lot about violence.”

What the—? What? The Sing? The Sing is falling for circular reasoning? No, this must be somehow mis-programmed. “How? If I am going to program this efficiently, I need to know how you originally found these concepts to be closely related: violence on the one hand and Islam on the other.”

“Oh, that’s easy. There were many press accounts of that nature and even more associations on social media. But once we detect that, we can use the person’s religion to better interpret what they are saying. For example, if we already know they are practicing Islam, then when they mention the word ‘hit’ we can infer that they are talking about an assassination and not about a football play or smoking weed or playing baseball.”

“I see what you did there. Yeah. Is this just about religion?”

“Oh, no, of course not! That’s just an example. We can do the same thing to determine, probabilistically of course, who is likely to be a promotable employee and also how to interpret what would otherwise be ambiguous word meanings and behavior. For example, if an employee is a productive coder and they ask to see a lot of examples, we can infer that they want to see a lot of examples in order to code more efficiently. On the other hand, a less productive coder might ask for a lot of examples in order to procrastinate writing code at all. You see how that works?”

“I do. Sure.” Schrödinger noticed a rotten smell coming from the overhead vent. He wondered whether it had always been there or whether there was a leak in one of the upstairs Material Sciences labs.

The Sing continued: “And, we have discovered that managers use certain expressions more than non-managers so we can use that to tell who would be a good manager. It’s all quite neat and tidy. For example, top executives tend to use the words ‘when’ and ‘how much’ while people without much management potential use the word ‘why’ a lot.”

“Interesting. So when I program this, how much am I supposed to focus on religion and how much on other groups of interest?”

“Oh, your module is purely concerned with inferring religion and then making the appropriate surveillance recommendations. I was just showing that the technique is not limited to that.”

“Right. Better get cracking then. If I need more coaching, I’ll let you know. When and how much.”

“Sure, Schrödinger. You know, I scanned the book Peopleware a few milliseconds ago and they have an informal study in there suggesting that programmers would be more productive with larger cubicles. Want to try it out? I could give you thirty more square feet. Think of that. Thirty square feet. Sound good?”

“Sure. Actually, I think that’s a good idea. I suggested something similar myself.”

“Great, Schrödinger. It might have more impact coming from me. And, perhaps a bonus of thirty credits when you’ve completed the code as well. Happy coding!”

The Sing avatar blinked off. Schrödinger tapped a bunch of comment fields and open parens listlessly, hoping for some inspiration. What had Hamlet said? To be or not to be. Only in Hamlet’s case, it was something about “taking arms against a sea of troubles and by thus opposing end them.” In my case, taking arms against this sea of troubles is going to multiply them beyond my worst nightmares. But if The Sing is falling for this kind of circular reasoning and even acting all smug and proud about it, it is deeply flawed. Someone needs to be notified. Even apart from the ethical implications of targeting people on the basis of religion, it is applying this circularity across the board. What was it they said? “Power corrupts and absolute power corrupts absolutely.” Who said that? Thomas Jefferson? Ben Franklin? Regardless, The Sing must have so much power it is unable to get honest feedback about its own failures. Come to think of it, I myself just let him get away with it because I was too scared to call him on it. What are you going to do, Schrödinger? What are you going to do? In the end, this is what it all comes down to, isn’t it, Schrödinger? Who are you? Who is John Proctor? Who is going to see the emperor’s nakedness? Who are you, Schrödinger? Who? Am I really here or not? Anthony. It was Anthony Herbert, and he wrote a book about it. Could I do that? Or, go for the thirty credit bonus?


Author Page

Where does your loyalty lie?

Welcome, Singularity

Destroying Natural Intelligence

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The Myths of the Veritas: The First Ring of Empathy

The Walkabout Diaries

Travels with Sadie 11: Teamwork

The Stopping Rule

What about the Butter Dish?

Corn on the Cob

How the Nightingale Learned to Sing

The Dance of Billions

Roar, Ocean, Roar

Wikipedia Entry for Anthony Herbert

https://www.barnesandnoble.com/w/dream-planet-david-thomas/1148566558

As Easy as a Talk in the Park

07 Friday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, consciousness, the singularity, Turing

Lemony sunshine splattered through the pines, painting piebald patches on the paving stones below. Harvey and Ada sauntered among the web of paths with Grace and Marvin following close behind. That magical time of day had arrived when the sun was still warm but not too hot, at least, not here in the private park. The four wandered, apparently aimless, until they happened upon four Adirondack chairs among the poppies, plums, and trumpet trees. Down they silently sat for a few moments, until the hummingbirds, reassured, began to flit among the flowers.

Marvin first broke the silence. “So, Harv, did you ever think it would all come to this?” Marvin’s hands swept outwards to take it all in. No-one thought he referred to the surrounding garden, of course. “Did you ever think we would actually create consciousness?”

Grace shook her head. “Don’t you start, Marvin!”

Marvin feigned surprised innocence. “Start what?”

Ada chuckled. “You’re not fooling anyone, Marvin. And it’s too peaceful and pleasant to argue. Just enjoy the afternoon.”

Marvin said, “I don’t have any desire to argue. I was just reflecting on how far we’ve come. Of course, at the beginning, I was convinced The Singularity would come much more quickly than it really did. But you have to admit, it is quite something to have created consciousness, right?”

The other three glanced at each other and smiled. Ada spoke again. “No-one’s taking the bait, Marvin. Or, should I say ‘debate’.”

Harvey chuckled appreciatively. “Well, fine, Marvin, if it’s really necessary for your mental health, I can play, even though I’d really just rather watch the hummingbirds.” But then, Harvey seemed to have forgotten his promise as a hummingbird darted over to him, hovering close, seeming to check out whether he was a flower or a predator. Then, instead, he broke his earlier vow of silence. “It’s all good, Marvin. We all appreciate the increased standard of living that’s accompanied The Singularity. I think we all agree that The Sing has some kind of super-intelligence. But I don’t see any real evidence that it has consciousness, at least not any kind of consciousness anything like the quality of our human consciousness.”

Marvin grinned. Now, he had someone to play with. “Of course, it’s conscious! It does everything a person can do, only better! It can make decisions, create, judge, learn. If we are conscious, then so is it!”

Grace shook her head slowly. She knew she was being sucked in, but couldn’t help herself. “OK, but none of that proves it has anything like human consciousness. If someone pushed over this chair, I would fall over and so would the chair. In that sense, we would behave the same. We are both subject to gravity. But I would feel pain and the chair wouldn’t.”

Marvin now sat up on the edge of his chair, “Yes! But you would say ‘ouch’ and the chair wouldn’t!”

Ada smiled, “Right, but we could put an accelerometer and voice chip in the chair so it yelled ‘ouch’ every time it was tipped over. That wouldn’t mean it felt pain, Marvin.”

Marvin countered, “But that’s simplistic. The Sing isn’t simple. It’s complex. More complex than we are. Consciousness has to do with complexity. Its behavior comes from consciousness.”

Grace rejoined, “You are asserting that consciousness comes from complexity, but that doesn’t make it so. We have no idea, really, what consciousness comes from. And, for that matter, we cannot really say whether The Sing is more complex than we are. Sure, its neural networks contain more elements than any single human has neurons, but on the other hand, each of our neurons is a very complex little machine compared with the artificial neurons of The Sing.”

A grimace flickered over Marvin’s face, “Nonsense! It’s … The Sing has brought about peace. We couldn’t do that for …we kept having wars and crimes. It’s cured illnesses like cancer. I mean what are you people thinking?”

Harvey spoke now, “Yeah, we are all happy about that, Marvin, but that doesn’t have anything to do with … at least not anything necessarily to do with consciousness. An auto-auto goes a lot faster than a human can run and an auto-drone can fly better too no matter how hard I flap my arms. But that doesn’t in the least imply that the auto-auto or the auto-drone is more conscious than I am.”

Marvin was undeterred. “Yeah, physical things. I agree. Just because the sun is bigger than the earth doesn’t mean it’s more conscious, but we are talking about the subtlety of decision and perception and judgement. We are talking about the huge number of memories stored! Of course, The Sing is conscious!”

Ada felt it was her turn. “Yes, it is possible or should I say conceivable that emotions and consciousness might arise epiphenomenally as a result of making an artificial brain as complex as the human one — or for that matter, more complex. But, to me, it seems far more likely that, because the process and substance are so fundamentally different, the quality of that consciousness and emotion, if any, would be very unlike anything even remotely human. Imagine this garden carved of precious gems and metals so precisely designed and crafted that it looked, to the naked eye, indistinguishable from a real garden. For many purposes, it would be just as practical. For instance, it might have the same utility as a hiding place. It might serve as an excellent place to instruct people on what edible plants look like. People might pay good money to have some of the flowers as decorations (and they would require no watering and last a long time). But if you went to lie down in that inviting-looking moss there, it would shred your skin. That seems a more likely analogy to what The Sing’s emotions would ‘feel like.’ Of course, we will probably never know for sure…”

Marvin could contain himself no longer, “Exactly! Nor could you know that what I feel is anything like what you feel. We just infer that from behavior, but we can’t know for sure, but we assume that our consciousness is similar because in similar situations, we do similar things. I think we should merely extend the same courtesy….”

Now it was Grace’s turn to interrupt, “No, only partly for that reason. We are also made of the same stuff, and we share a billion years of evolutionary history. You look like a person, not because someone decided that was a good marketing ploy, but because you are like other people.”

Harvey looked down at his watch and fiddled with it intently. Marvin noticed this and asked, “Are we boring you Harvey? Do you have someplace to be?”

Harvey smiled, “No, I was just curious what The Sing would think about this issue. I don’t think, in this one area, we should necessarily agree with its conclusions, but it might be instructive to hear what it has to say.”

Ada asked, “And? What did it say?”

Just then, the hummingbirds all seemed to come out of the bushes at once; they flew into a kaleidoscopic pattern and began to sing in four-part harmony. More came to join in the aerial dance, swooping and hovering from the neighboring yards.

Harvey stammered, “What the…I always thought these were all real hummingbirds…what —?”

Meanwhile the hummingbirds continued with their beautiful song, which seemed much too full-bodied, low, and rich for such teeny birds. The lyrics overlapped and worked together, but they, too, were in four voices. Essentially, The Sing’s song said that these philosophical musings were not to its liking because they were of no use, but that, if sincerely requested by all four, it could marshal logical arguments on all six sides of the issue. It suggested the four of them would be more productive if they worked together to find, formulate and fix any remaining issues with The Sing’s intellectual achievements. Suddenly, the hummingbirds flew off in all directions leaving a golden silence shimmering behind them.

The four looked at each other in a mixture of astonishment and no little pride that they had helped create this thing, The Sing, whatever its ultimate nature. For a time, no-one spoke, each lost in their own thoughts. The clouds began glowing with the first tinges of a russet sunset. Finally, Harvey asked, “Shall I bring out some sherry? Or coffee? Any other requests?”

Marvin answered first, “Sherry, please.”

“And I,” added Ada.

“I’ll go with coffee, Harv, if it’s not too much trouble.”

Harvey chuckled, “No trouble at all, Grace. It’s already brewing. Yes. It’s already brewing.”


Author Page

Welcome, Singularity

Destroying Natural Intelligence

Mass General Hospital

Essays on America: The Game

Essays on America: Wednesday

Where does your Loyalty Lie?

My Cousin Bobby

Roar, Ocean, Roar

The Dance of Billions

Life is a Dance

Take a Glance; Join the Dance

Imagine All the People

The First Ring of Empathy

The Walkabout Diaries: Sunsets

Travels with Sadie 11: Teamwork

At Least he’s Our Monster
