petersironwood

~ Finding, formulating and solving life's frustrations.

Monthly Archives: October 2025

Turing’s Nightmares: Ceci n’est pas une pipe.

06 Monday Oct 2025

Posted by petersironwood in AI, family, fiction, story, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, fiction, short story, the singularity, Turing, utopia, writing

“RUReady, Pearl?” asked her dad, Herb, a sardonic smile forming as the car windows opaqued and the three edutainment programs began.

“Sure, I guess. I hope I like Dartmouth better than Asimov State. That was the pits.”

“It’s probably not the pits, but maybe…Dartmouth.”

These days, Herb kept his verbiage curt while his daughter stared and listened in her bubble within the car.

“Dad, why did we have to bring the twerp along? He’s just going to be in the way.”

Herb sighed. “I want your brother to see these places too while we still have enough travel credits to go physically.”

The twerp, aka Quillian, piped up, “Just because you’re the oldest, Pearl…”

Herb cut in quickly, “OK, enough! This is going to be a long drive, so let’s keep it pleasant.”

The car swerved suddenly to avoid a falling bike.

Photo by Pixabay on Pexels.com

“Geez, Brooks, be careful!”

Brooks, the car, laughed gently and said, “Sorry, Sir, I was being careful. Not sure why the Rummelnet still allows humans some of their hobbies, but it’s not for me to say. By the way, ETA for Dartmouth is ten minutes.”

“Why so long, Brooks?” inquired Herb.

“Congestion in Baltimore. Sir, I can go over or around, but it will take even longer, and use more fuel credits.”

“No, no, straight and steady. So, when I went to college, Pearl, you know, we only had one personal computer…”

“…to study on and it wasn’t very powerful and there were only a few intelligent tutoring systems and people had to worry about getting a job after graduation and people got drunk and stoned. LOL, Dad. You’ve only told me a million times.”

“And me,” Quillian piped up. “Dad, you do know they teach us history too, right?”

“Yes, Quillian, but it isn’t the same as being there. I thought you might like a little first-hand look.”

Pearl shook her head almost imperceptibly. “Yes, thanks, Dad. The thing is, we do get to experience it first hand. Between first-person games, enhanced ultra-high-def videos, and simulations, I feel like I lived through the first half of the twenty-first century. And for that matter, the twentieth and the nineteenth, and…well, you do the math.”

Quillian again piped up, “You’re so smart, Pearl, I don’t even know why you need or want to go to college. Makes zero sense. Right, Brooks?”

“Of course, Master Quillian, I’m not qualified to answer that, but the consensus answer from the Michie-meisters sides with you. On the other hand, if that’s what Pearl wants, no harm.”

“What I want? Hah! I want to be a Hollywood star, of course. But dear mom and dad won’t let me. And when I win my first Oscar, you can bet I will let the world know too.”

“Pearl, when you turn ten, you can make your own decisions, but for now, you have to trust us to make decisions for you.”

“Why should I, Dad? You heard Brooks. He said the Michie-meisters find no reasons for me to go to college. What is the point?”

Herb sighed. “How can I make you see? There’s a difference between really being someplace and just being in a simulation of someplace.”

Pearl repeated and exaggerated her dad’s sigh, “And how can I make you see that it’s a difference that makes no difference. Right, Brooks?”

Brooks answered in those mellow, reasoned tones, “Perhaps, Pearl, it makes a difference somehow to your dad. He was born, after all, in another century. Anyway, here we are.”

Brooks turned off the entertainment vids and slid back the doors. There appeared before them a vast expanse of lawn, tall trees, and several classic buildings from the Dartmouth campus. The trio of humans stepped out onto the grass and began walking over to the moving sidewalk. Right before stepping on, Herb stooped down and picked up something from the ground. “What the…?”

Quillian piped up: “Oh, great, Dad. Picking up old bandaids now? Is that your new hobby?”

“Kids. This is the same bandaid that fell off my hand in Miami when I loaded our travel bag into the back seat. Do you understand? It’s the same one.”

The kids shrugged in unison. Only Pearl spoke, “Whatever. I don’t know why you still use those ancient dirty things anyway.”

Herb blinked and spoke very deliberately. “But it — is — the — same — one. Miami. Hanover.”

The kids just shook their heads as they stepped onto the moving sidewalk and the image of the Dartmouth campus loomed ever larger in their sight.

Author Page on Amazon

Turing’s Nightmares

A Horror Story

Absolute is not Just a Vodka

Destroying Natural Intelligence

Welcome, Singularity

The Invisibility Cloak of Habit

Organizing the Doltzville Library

Naughty Knots

All that Glitters

Grammar, AI, and Truthiness

The Con Man’s Con

Travels with Sadie 10: The Best Laid Plans

05 Sunday Oct 2025

Posted by petersironwood in family, nature, pets, psychology, Sadie, Uncategorized

≈ 2 Comments

Tags

books, dogs, fiction, GoldenDoodle, life, nature, pets, Sadie, story, truth, writing

Our dogs are large. And strong. And young. And, sometimes, Sadie (the older one) does “good walking” but sometimes, she pulls. Hard. She’s had lots of training. And, as I said, she will often walk well, but still tends to pull after a small mammal or a hawk or a lizard. She pulls hard if she needs desperately to find the perfect spot to “do her business.” She pulls hardest to try to meet a friend (human or canine).

When she pulls, it is a strain on my feet and my knees and my back. I can hold her, but barely. To remedy the situation, we got another kind of leash/collar arrangement which includes a piece to go over her snout. We acclimated Sadie, and her brother Bailey, to the “gentle lead” and decided we’d try walking them together.

Safer leash, safer walk was the plan. Indeed, the dogs didn’t pull as they often do. Nonetheless, I managed to fall on the asphalt while walking Sadie–the first time I ever fell on the hard road. I’m not sure exactly what happened. The leash is shorter and Sadie has a tendency to weave back and forth in front of me. I may have tripped on Sadie herself or stumbled on a slight imperfection in the road.

Anyway, this morning, we decided to try again but this time, Bailey went with the gentle leader and I was going to use the “normal” leash with Sadie. The plan was to walk together.



Sadie had other plans. Instead of heading up the street as we normally do, she immediately turned right into our front yard, intent on following the scent of … ?? Most likely, she smelled the path of a squirrel that’s been frequenting our yard. Anyway, Sadie was in her “olfactory pulling” mode. Some days, especially when it’s been raining or there is dew on the ground, she goes into an “olfactory exploratory” mode. She takes her time to “smell the roses” and everything else. This makes for a very pleasant, though slow, walk. I call it good walking. She gets to explore a huge variety of scents and she doesn’t “pull” hard or unexpectedly. It’s the canine equivalent of idle web surfing, browsing the stacks of the library, or wandering through MoMA, the Metropolitan Museum of Art, or the Louvre.

The “olfactory pulling” mode is an entirely different thing. Here, she is trying desperately to track down whatever it is she’s tracking before it gets away! She imagines (I imagine) that her very life depends on finding this particular prey (even though she is well-fed; and even though, in this mode, she shows zero interest in the treats I’ve brought along). Conversely, in the “olfactory exploratory” mode, she’s quite happy to stop for treats every few yards.

This morning, we never found the “prey” she was after, but she did her business and, since she was wantonly pulling, I took her back inside in short order and set out to catch up with Bailey and my wife. Before long, I saw them up ahead and soon closed the gap. Having both hands free allowed me to take many more pictures than I usually do when I take Sadie on a walk.



The sky, like Sadie, has many moods, even in the San Diego area. This morning, the sky couldn’t seem to make up its mind whether to be sunny or cloudy. I don’t mind the mood swings; they provide some interesting contrasts.

Bailey behaved pretty well though he still gets very vocal and agitated when any of the numerous neighborhood dogs begin to bark. He’s much like the Internet Guy (and, let’s face it, it’s almost always a guy) who has to comment on every single post. But the new leash arrangement worked well and didn’t cause any falls or prolonged pulls.

Bailey does, however, look rather baleful about wearing the extra equipment. What do you think?

And while on the topic of reading the minds of dogs, I did wonder if something like the following crossed Sadie’s mind this morning. She saw Bailey get fitted with the leash and the over-the-snout attachment. I put the regular leash on Sadie. Then, Sadie saw Wendy and Bailey walk out ahead and instead of following them, she immediately turned off in a different direction. Presumably, she caught a whiff of the scent she felt obligated to follow.



But I also wondered if she was partly avoiding the situation from two days earlier, wherein Wendy and I each walked one dog, both dogs wearing the additional lead on the snout–which ultimately led to my fall. Maybe Sadie wanted “nothing to do” with having that type of leash on.

I have observed that kind of behavior in humans. Perhaps you can think of a few examples even from your own experience? Sadie certainly has a kind of metacognition that she seems to use on occasion. When she begins to explore something she knows from experience I do not want her to explore (e.g., a cigarette butt or an animal carcass), she herself moves quickly away from the tempting stimulus seemingly with no prompting from me. It’s as though she realizes she’ll be more comfortable not being in conflict.

I’ll be interested to see how she reacts tomorrow or tonight when I again try the two-lead leash.



Meanwhile, enjoy the play of light on the flowers. You can see in this sequence that I “followed the scent” of the brightly lit fan palm tree to get a closer view. Getting a “closer view” is what Sadie does when she follows a scent. I wish to get more details in the visual domain whereas Sadie wants to get more detail in the olfactory domain.

Sometimes, I scan my visual field for something interesting to photograph (explore in more detail) and sometimes, I’m fixated on a particular “target” and looking for the right framing, lighting conditions, or angle. I enjoy getting to a particular picture, but I also enjoy the process of getting to the picture that pleases. I imagine it’s the same with Sadie. She’s quite happy to find a lizard or squirrel or rabbit, but she’s also happy to search for prey, particularly in promising conditions, such as a strong scent or wet ground that holds scents.



Plans?

Some management consultants will tell you that plans are seldom right, but that planning–that is the real gold.


Author Page on Amazon

Tales from an American Childhood

Travels with Sadie 1

Travels with Sadie 2

Travels with Sadie 3

Travels with Sadie 4

Travels with Sadie 5

Travels with Sadie 6

Travels with Sadie 7

Travels with Sadie 8

Travels with Sadie 9

Sadie and the Lighty Ball

Dog Years

Sadie is a Thief!

Take me out to the Ball Game

Play Ball! The Squeaky Ball

Sadie

Occam’s Chain Saw Massacre

Math Class: Who Are You?

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing

IMG_0049

After countless false starts, the Cognitive Computing Collaborative Consortium (4C) decided that, in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just trot out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that, but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents comprising not only philosophical and religious writings, but also the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke, and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month; without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”

————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself, since it is able to communicate in 200 languages. Indeed, Deep Purple Haze found it quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the teletyped Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.

The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself, since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”

Photo by Kaboompics .com on Pexels.com

Interrogator: “Okay. Can you share with us how long you estimate it will be before you can design a more intelligent supercomputer than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”

DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator: “But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.


Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing

The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop an even more super-intelligent computer system. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way. The head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside, can continue to grow. For this and a variety of other reasons, it seems unlikely that human intelligence will expand much in the next few centuries.

Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
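
To make “exponentially” concrete, here is a toy calculation. It is a minimal sketch with invented numbers, not a prediction: the `gain` multiplier per generation is an assumption chosen purely for illustration. The point is only that compounding, even at a modest rate, quickly dwarfs a fixed biological baseline.

```python
# Toy model of recursive self-improvement. All numbers are invented
# for illustration; "gain" is an assumed capability multiplier per
# generation, not an empirical estimate.

def singularity_toy(generations: int = 10, gain: float = 2.0) -> None:
    ai = 1.0  # generation 0: assume roughly human-level capability
    for g in range(1, generations + 1):
        ai *= gain  # each generation designs a successor `gain` times as capable
        print(f"generation {g:2d}: {ai:,.0f}x the human baseline")

singularity_toy()
# Ten doublings already yield a 1,024x gap -- substantial, not incremental.
```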

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the Allies and prevent possible world domination by the Nazis. He did this by designing a code-breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality, and ultimately hounded him, literally, to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being, or it could be a computer. If the person cannot determine whether he is communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
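
The protocol itself is simple enough to sketch in a few lines of code. The sketch below is purely illustrative (the canned replies and the coin-flip judge are stand-ins I invented, not real AI): a judge converses blind with a hidden respondent and then guesses, and the machine “passes” to the degree the judge cannot beat chance.

```python
import random

# Illustrative sketch of the teletype test: the judge sees only text
# and must guess whether the hidden respondent is a machine.

def run_trial(human_reply, machine_reply, judge) -> bool:
    respondent_is_machine = random.random() < 0.5  # hide who is answering
    reply = machine_reply if respondent_is_machine else human_reply
    question = "What is your favorite piece of music?"
    transcript = [("Q", question), ("A", reply(question))]
    return judge(transcript) == respondent_is_machine  # was the judge right?

# A judge who can only flip a coin is correct about half the time,
# which is exactly what "passing" the test looks like:
results = [run_trial(lambda q: "Probably the Goldberg Variations.",
                     lambda q: "I do not listen to music.",
                     lambda t: random.random() < 0.5)
           for _ in range(10_000)]
print(f"judge accuracy: {sum(results) / len(results):.2%}")
```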

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being were able to tell easily that they were communicating with a computer because the computer knew more and answered more accurately and more quickly than any person possibly could. (Think Watson and Jeopardy.) Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent?

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to things like earthquakes, weather, natural disasters, plagues, etc. These are claimed to be signs that God (or the gods) are angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.

Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number and connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Photo by Tanhauser Vázquez R. on Pexels.com

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.

 

When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications, not to “argue” what will or will not happen.

Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.

 

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 
