petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: consciousness

And, then what?

Tuesday, December 16, 2025

Posted by petersironwood in creativity, psychology, Uncategorized


Tags

AI, Business, chatgpt, consciousness, consequences, Democracy, Feedback, innovation, learning, life, science, testing, thinking, USA


When it comes to increasing the drama in TV crime shows, westerns, and spy thrillers, both the brilliant, evil villain and the smart, brave, good-looking protagonist display one common and remarkable weakness: they rush into action without much thought as to the possible consequences of their actions. 

Here’s a scene that you and I have probably seen a thousand times. The Hero has a gun drawn and a bead on “The Evil One,” but The Evil One has a knife to the throat of The Hero’s friend or lover. The Evil One, as both we in the audience and The Hero know, cannot be trusted. Most likely, The Evil One has already caused the deaths of many people, is treacherous, and lies as easily as most people breathe. Nonetheless, The Evil One promises to release The Hero’s friend or lover provided only that The Hero puts down their gun and slides it over to The Evil One. And The Hero complies! Often, The Hero will elicit a “promise” from The Evil One: “OK, I’ll give you my gun, but you have to let them go!” The Evil One, for whom promises mean nothing, “promises,” and then The Hero slides the gun over. At this point, The Evil One is obviously free to kill both The Hero and their friend or lover immediately. Instead, The Evil One will begin chatting them up. This allows time for magic, skill, accident, God, unknown allies, or brilliance to turn the tables on The Evil One.


Here’s another scene that we’ve both witnessed. The Hero suddenly finds out some crucial piece of information that reveals the whereabouts of The Evil One. Often this is an abandoned warehouse filled to the brim with minions of The Evil One. But it might be the cave deep beneath the island stronghold of The Evil One; a stronghold filled to the brim with minions. The Hero rushes in with a woefully inadequate force and without informing anyone of their whereabouts. The Hero confronts The Evil One, who not only confesses to past misdeeds but outlines their future plans to The Hero as well.

Photo by Rene Asmussen on Pexels.com

In the TV series or the movies, the sequence of events is determined by the writer(s), so even though The Hero faces impossible odds, he or she will almost certainly overcome them. That makes for an exciting story!

But in life? 

In real life, you’ll typically do a lot better if you think about the likely consequences of your actions. 

Sometimes, people fail to do this because they have simply never developed the habit of thinking ahead. 

Sometimes, people let their wishes completely color their decisions. For instance, an addicted gambler, despite their actual experience, believes that gambling more will result in a favorable outcome, when in truth there is only an extremely small chance of coming out ahead overall.

Sometimes, people are too ignorant to realize that there are potential negative consequences. For instance, when I was a youngster, I had a “glow in the dark” watch and cross; each glowed partly because of radium. I enjoyed putting these right up to my eyes in order to observe the flashes of individual photons. I also put together model airplanes with glue. When I applied too much glue, I dissolved the excess with carbon tetrachloride. I loved the exotic smell of “Carbon Tet.” Now, it is deemed too dangerous to be used in this way.

Photo by Pixabay on Pexels.com

In many cases, it seems to me that people do think about consequences but use an overly simple model of reality on which to base their predictions. In particular, people often treat individuals and social systems as mechanical systems and base their decisions on those mechanical models rather than actuality. For example, your kid does not, in your opinion, eat enough broccoli so you simply force them to eat broccoli. Your “prediction” of the consequences of this may include that the kid will eat more broccoli, be healthier, eventually like broccoli, etc. Depending on the individual child, it may be that none of these will actually occur. In some cases, it may even happen that the exact opposite of your goals will be achieved. The kid may eat less broccoli, be unhealthier, and hate broccoli more than ever. There are many other possible consequences as well. The kid may end up hating meals with the family or hating you or hating the color green. 

When it comes to individuals and social systems, it is hard to know what the net effect might be. Often though, the most significant cognitive problem that people have is that they are so sure of their prediction that they base their actions on what they think should happen rather than what actually does happen or what might happen. 

As recounted in some detail in the Pattern, “Reality Check,” instituting a new social reward or punishment system often does indeed change behavior, but not necessarily in the desired manner. If, for instance, programmers are now rewarded on the basis of lines of code written, they might indeed write more lines of code, but many of those lines may be unnecessary. You might write 1000 lines of code, or you could spend time thinking about the problem and then write two lines of code that accomplish the same result. Will you do so if you are only rewarded 1/500th of the bonus?

Photo by Moose Photos on Pexels.com

Similarly, you may measure the performance of service technicians by how many calls they “handle” during their shift. But if that is the main or sole measure, those service people may tend to offer trivial or even useless advice based on insufficient information. In all these cases, if management keeps watching what really happens, any damage done by an inaccurate predictive model of what will happen as a result of a change will be mitigated. But in a system, whether private or governmental, where people are mainly motivated to keep management happy by telling them what they want to hear, a poor intervention will not be corrected; instead, the problems caused by inadequate models will tend to multiply, fester, or explode.

So: 

Think of possible consequences and try to determine which ones are most likely. Then, observe what really does happen. This helps avoid turning an issue into a disaster and, over time, it also helps you develop more realistic models of reality. It will also tend to put you in the habit of taking a flexible and reality-based approach to your decisions rather than one that is based on a rigid and inaccurate model of how things should be. The latter approach to decisions will not only make you individually ineffective; it will also make it almost impossible to work well with others (unless everyone involved shares the same inaccurate model). 


Author Page on Amazon. 


Turing’s Nightmares: Chapter Three

Tuesday, November 11, 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence, 2) the value of having multiple and diverse AI systems living somewhat different lives and interacting with each other for improving intelligence, 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought, and 4) the likelihood that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct, rather than simply having everything specifically programmed. Let us examine these one by one.


There are many practical reasons that autonomous robots can be useful. In some practical applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral and voice patterns of a particular person more easily than we could develop speaker-independent speech recognition and generic preference models. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages to having actual robotic systems that sense and act in the real world, in terms of moving us closer to “The Singularity.” This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.


I would not argue that having an entity that moves through space and perceives is necessary to having any intelligence, or for that matter, to having any consciousness. However, it seems quite natural to believe that the qualities both of intelligence and consciousness are influenced by what is possible for the entity to perceive and to do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly influence the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were developed, historically, by a community that included people who could move and perceive.

Imagine instead a race of beings who could not move through space or perceive any specific senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?


What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked together on a pivoted gondola apparatus: one kitten was able to “walk” through a visual field while the other was passively moved through that same visual field. The kitten that was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P.K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P.K., Tsao, F.M., and Liu, H.M. (2003). Foreign-language experience in infancy: effects of short-term exposure and social interaction on phonetic learning. Proc Natl Acad Sci U S A, 100(15), 9096-101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.


Would there be advantages in having several different robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution and when it comes to the people comprising teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.)


The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligence by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity”, we are going to slow down progress considerably. On the other hand, if we do not “keep tabs”, then very soon, we will have no real idea what they are up to! An analogy might be the first “proof” that you only need four colors to color any planar map. There were so many cases (nearly 2000) that this proof made no sense to most people. Even the algebraic topologists who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify). So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be way too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.


Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)


But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to “Do unto others as you would have them do to you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or, maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their own physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a long history of destroying other entire species when it is convenient, or sometimes, just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?

Turing’s Nightmares


As Easy as a Talk in the Park

Friday, November 7, 2025

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, consciousness, the singularity, Turing


Lemony sunshine splattered through the pines, painting piebald patches on the paving stones below. Harvey and Ada sauntered among the web of paths with Grace and Marvin following close behind. That magical time of day had arrived when the sun was still warm but not too hot, at least, not here in the private park. The four wandered, apparently aimless, until they happened upon four Adirondacks chairs among the poppies, plums, and trumpet trees. Down they silently sat for a few moments, until the hummingbirds, re-assured, began to flit among the flowers.

Marvin first broke the silence. “So, Harv, did you ever think it would all come to this?” Marvin’s hands swept outwards to take it all in. No-one thought he referred to the surrounding garden, of course. “Did you ever think we would actually create consciousness?”

Grace shook her head. “Don’t you start, Marvin!”

Marvin feigned surprised innocence. “Start what?”

Ada chuckled. “You’re not fooling anyone, Marvin. And it’s too peaceful and pleasant to argue. Just enjoy the afternoon.”

Marvin said, “I don’t have any desire to argue. I was just reflecting on how far we’ve come. Of course, at the beginning, I was convinced The Singularity would come much more quickly than it really did. But you have to admit, it is quite something to have created consciousness, right?”

The other three glanced at each other and smiled. Ada spoke again. “No-one’s taking the bait, Marvin. Or, should I say ‘debate’.”

Harvey chuckled appreciatively. “Well, fine, Marvin, if it’s really necessary for your mental health, I can play, even though I’d really just rather watch the hummingbirds.” But then, Harvey seemed to have forgotten his promise as a hummingbird darted over to him, hovering close; seeming to check out whether he was a flower or a predator. Then, instead, he broke his earlier vow of silence. “It’s all good, Marvin. We all appreciate the increased standard of living that’s accompanied The Singularity. I think we all agree that The Sing has some kind of super-intelligence. But I don’t see any real evidence that it has consciousness, at least not any kind of consciousness anything like the quality of our human consciousness.”

Marvin grinned. Now he had someone to play with. “Of course, it’s conscious! It does everything a person can do, only better! It can make decisions, create, judge, learn. If we are conscious, then so is it!”

Grace shook her head slowly. She knew she was being sucked in, but couldn’t help herself. “OK, but none of that proves it has anything like human consciousness. If someone pushed over this chair, I would fall over and so would the chair. In that sense, we would behave the same. We are both subject to gravity. But I would feel pain and the chair wouldn’t.”

Marvin now sat up on the edge of his chair, “Yes! But you would say ‘ouch’ and the chair wouldn’t!”

Ada smiled, “Right, but we could put an accelerometer and voice chip in the chair so it yelled ‘ouch’ every time it was tipped over. That wouldn’t mean it felt pain, Marvin.”

Marvin countered, “But that’s simplistic. The Sing isn’t simple. It’s complex. More complex than we are. Consciousness has to do with complexity. Its behavior comes from consciousness.”

Grace rejoined, “You are asserting that consciousness comes from complexity, but that doesn’t make it so. We have no idea, really, what consciousness comes from. And, for that matter, we cannot really say whether The Sing is more complex than we are. Sure, its neural networks contain more elements than any single human has neurons, but on the other hand, each of our neurons is a very complex little machine compared with the artificial neurons of The Sing.”

A grimace flickered over Marvin’s face, “Nonsense! It’s … The Sing has brought about peace. We couldn’t do that for …we kept having wars and crimes. It’s cured illnesses like cancer. I mean what are you people thinking?”

Harvey spoke now, “Yeah, we are all happy about that, Marvin, but that doesn’t have anything to do with … at least not anything necessarily to do with consciousness. An auto-auto goes a lot faster than a human can run and an auto-drone can fly better too no matter how hard I flap my arms. But that doesn’t in the least imply that the auto-auto or the auto-drone is more conscious than I am.”

Marvin was undeterred. “Yeah, physical things. I agree. Just because the sun is bigger than the earth doesn’t mean it’s more conscious, but we are talking about the subtlety of decision and perception and judgement. We are talking about the huge number of memories stored! Of course, The Sing is conscious!”

Ada felt it was her turn. “Yes, it is possible, or should I say conceivable, that emotions and consciousness might arise epiphenomenally as a result of making an artificial brain as complex as the human one — or for that matter, more complex. But, to me, it seems far more likely that, because the process and substance are so fundamentally different, the quality of that consciousness and emotion, if any, would be very unlike anything even remotely human. Imagine this garden carved of precious gems and metals so precisely designed and crafted that it looked, to the naked eye, indistinguishable from a real garden. For many purposes, it would be just as practical. For instance, it might have the same utility as a hiding place. It might serve as an excellent place to instruct people on what edible plants look like. People might pay good money to have some of the flowers as decorations (and they would require no watering and last a long time). But if you went to lie down in that inviting-looking moss there, it would shred your skin. That seems a more likely analogy to what The Sing’s emotions would ‘feel like.’ Of course, we will probably never know for sure…”

Marvin could contain himself no longer, “Exactly! Nor could you know that what I feel is anything like what you feel. We just infer that from behavior; we can’t know for sure, but we assume that our consciousness is similar because, in similar situations, we do similar things. I think we should merely extend the same courtesy….”

Now it was Grace’s turn to interrupt, “No, only partly for that reason. We are also made of the same stuff, and we share a billion years of evolutionary history. You look like a person, not because someone decided that was a good marketing ploy, but because you are like other people.”

Harvey looked down at his watch and fiddled with it intently. Marvin noticed this and asked, “Are we boring you Harvey? Do you have someplace to be?”

Harvey smiled, “No, I was just curious what The Sing would think about this issue. I don’t think, in this one area, we should necessarily agree with its conclusions, but it might be instructive to hear what it has to say.”

Ada asked, “And? What did it say?”

Just then, the hummingbirds all seemed to come out of the bushes at once; they flew into a kaleidoscopic pattern and began to sing in four-part harmony. More came from the neighboring yards to join the aerial dance, swooping and hovering.

Harvey stammered, “What the…I always thought these were all real hummingbirds…what —?”

Meanwhile the hummingbirds continued with their beautiful song, which seemed much too full-bodied, low, and rich for such teeny birds. The lyrics overlapped and worked together, but they too were in four voices. Essentially, The Sing’s song sang that these philosophical musings were not to its liking, because they were of no use, but that, if sincerely requested by all four, it could marshal logical arguments on all six sides of the issue. It suggested the four of them would be more productive if they worked together to find, formulate, and fix any remaining issues with The Sing’s intellectual achievements. Suddenly, the hummingbirds flew off in all directions, leaving a golden silence shimmering behind them.

The four looked at each other in a mixture of astonishment and no little pride that they had helped create this thing, The Sing, whatever its ultimate nature. For a time, no-one spoke, each lost in their own thoughts. The clouds began glowing with the first tinges of a russet sunset. Finally, Harvey asked, “Shall I bring out some sherry? Or coffee? Any other requests?”

Marvin answered first, “Sherry, please.”

“And I,” added Ada.

“I’ll go with coffee, Harv, if it’s not too much trouble.”

Harvey chuckled, “No trouble at all, Grace. It’s already brewing. Yes. It’s already brewing.”


