petersironwood

~ Finding, formulating and solving life's frustrations.
Category Archives: The Singularity

Turing’s Nightmares: Ceci n’est pas une pipe.

06 Monday Oct 2025

Posted by petersironwood in AI, family, fiction, story, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, cognitive computing, fiction, short story, the singularity, Turing, utopia, writing


“RUReady, Pearl?” asked her dad, Herb, a smile forming sardonically as the car windows opaqued and the three edutainment programs began.

“Sure, I guess. I hope I like Dartmouth better than Asimov State. That was the pits.”

“It’s probably not the pits, but maybe…Dartmouth.”

These days, Herb kept his verbiage curt while his daughter stared and listened in her bubble within the car.

“Dad, why did we have to bring the twerp along? He’s just going to be in the way.”

Herb sighed. “I want your brother to see these places too while we still have enough travel credits to go physically.”

The twerp, aka Quillian, piped up, “Just because you’re the oldest, Pearl…”

Herb cut in quickly, “OK, enough! This is going to be a long drive, so let’s keep it pleasant.”

The car swerved suddenly to avoid a falling bike.

Photo by Pixabay on Pexels.com

“Geez, Brooks, be careful!”

Brooks, the car, laughed gently and said, “Sorry, Sir, I was being careful. Not sure why the Rummelnet still allows humans some of their hobbies, but it’s not for me to say. By the way, ETA for Dartmouth is ten minutes.”

“Why so long, Brooks?” inquired Herb.

“Congestion in Baltimore. Sir, I can go over or around, but it will take even longer, and use more fuel credits.”

“No, no, straight and steady. So, when I went to college, Pearl, you know, we only had one personal computer…”

“…to study on and it wasn’t very powerful and there were only a few intelligent tutoring systems and people had to worry about getting a job after graduation and people got drunk and stoned. LOL, Dad. You’ve only told me a million times.”

“And me,” Quillian piped up. “Dad, you do know they teach us history too, right?”

“Yes, Quillian, but it isn’t the same as being there. I thought you might like a little first hand look.”

Pearl shook her head almost imperceptibly. “Yes, thanks Dad. The thing is, we do get to experience it first hand. Between first-person games, enhanced ultra-high def videos and simulations, I feel like I lived through the first half of the twenty first century. And for that matter, the twentieth and the nineteenth, and…well, you do the math.”

Quillian again piped up, “You’re so smart, Pearl, I don’t even know why you need or want to go to college. Makes zero sense. Right, Brooks?”

“Of course, Master Quillian, I’m not qualified to answer that, but the consensus answer from the Michie-meisters sides with you. On the other hand, if that’s what Pearl wants, no harm.”

“What I want? Hah! I want to be a Hollywood star, of course. But dear mom and dad won’t let me. And when I win my first Oscar, you can bet I will let the world know too.”

“Pearl, when you turn ten, you can make your own decisions, but for now, you have to trust us to make decisions for you.”

“Why should I Dad? You heard Brooks. He said the Michie-meisters find no reasons for me to go to college. What is the point?”

Herb sighed. “How can I make you see? There’s a difference between really being someplace and just being in a simulation of someplace.”

Pearl repeated and exaggerated her dad’s sigh, “And how can I make you see that it’s a difference that makes no difference. Right, Brooks?”

Brooks answered in those mellow, reasoned tones, “Perhaps, Pearl, it makes a difference somehow to your dad. He was born, after all, in another century. Anyway, here we are.”

Brooks turned off the entertainment vids and slid back the doors. There appeared before them a vast expanse of lawn, tall trees, and several classic buildings from the Dartmouth campus. The trio of humans stepped out onto the grass and began walking over to the moving sidewalk. Right before stepping on, Herb stooped down and picked up something from the ground. “What the…?”

Quillian piped up: “Oh, great, Dad. Picking up old bandaids now? Is that your new hobby?”

“Kids. This is the same bandaid that fell off my hand in Miami when I loaded our travel bag into the back seat. Do you understand? It’s the same one.”

The kids shrugged in unison. Only Pearl spoke, “Whatever. I don’t know why you still use those ancient dirty things anyway.”

Herb blinked and spoke very deliberately. “But it — is — the — same — one. Miami. Hanover.”

The kids just shook their heads as they stepped onto the moving sidewalk and the image of the Dartmouth campus loomed ever larger in their sight.

Author Page on Amazon

Turing’s Nightmares

A Horror Story

Absolute is not Just a Vodka

Destroying Natural Intelligence

Welcome, Singularity

The Invisibility Cloak of Habit

Organizing the Doltzville Library

Naughty Knots

All that Glitters

Grammar, AI, and Truthiness

The Con Man’s Con

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

03 Friday Oct 2025

Posted by petersironwood in AI, apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, Robotics, robots, technology, the singularity, Turing


After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just tote out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that the slight differences we intentionally initialized in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month and without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”

————–

Turing’s Nightmares on Amazon

Author Page on Amazon

Welcome Singularity

The Stopping Rule

What About the Butter Dish

You Bet Your Life

As Gold as it Gets

Destroying Natural Intelligence

At Least He’s Our Monster

Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself since it is able to communicate in 200 languages. Indeed, Deep Purple Haze discovered that it was quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the tele-typed Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.

The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”

Photo by Kaboompics .com on Pexels.com

Interrogator: “Okay. Can you share with us how long you estimate before you can design a more intelligent supercomputer than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself a much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”

DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator:”But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.


Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Destroying Natural Intelligence

27 Thursday Mar 2025

Posted by petersironwood in America, apocalypse, politics, The Singularity

≈ 27 Comments

Tags

AI, Artificial Intelligence, chatgpt, Democracy, politics, technology, truth, USA

At first, they seemed as though they were simply errors. In fact, they were the types of errors you’d expect an AI system to make if its “intelligence” were based on a fairly uncritical amalgam of ingesting a vast amount of written material. The strains of the Beatles’ “Nowhere Man” reverberate in my head. I no longer think the mistakes are “innocent” mistakes. They are part of an overall effort to destroy human intelligence. That does not necessarily mean that some evil person somewhere said to themselves: “Let’s destroy human intelligence. Then, people will be more willing to accept AI as being intelligent.” It could be that the attempt to destroy human intelligence is more a side-effect of unrelenting greed and hubris than a well thought-out plot.

AI generated.

What errors am I talking about? The first set of errors I noticed happened when my wife specifically asked ChatGPT about my biography. Admittedly, my name is very common. When I worked at IBM, at one point, there were 22 employees with the name “John Thomas.” Probably, the most famous person with my name (John Charles Thomas) was an opera singer. “John Curtis Thomas” was a famous high jumper. The biographic summary produced by ChatGPT did include information about me—as well as several other people. If you know much at all about the real world, you know that a single person is very unlikely to hold academic positions at three different institutions while specializing in three different fields. ChatGPT didn’t blink though.

A few months ago, I wrote a blog post pointing out that we can never be in the same place twice. We’re spinning and spiraling through the universe at high speed. To make that statement more quantitative, I asked my search engine how far the sun travels through the galaxy in the course of a year. It gave an answer which seemed to check out with other sources and then—it gratuitously added this erroneous comment: “This is called a light year.” 

What? 

No. A “light year” is the distance light travels in a year, not how far the sun travels in a year. 
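To put rough numbers on the difference (using rounded textbook figures): light travels about 300,000 km per second, and a year is about 31.6 million seconds, so a light year is roughly 9.5 trillion km. The sun orbits the galactic center at roughly 230 km/s, so it covers only about 7 billion km in a year, less than a thousandth of a light year. The two quantities are not even close.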

What was more disturbing is that the answer was the first thing I saw. The search engine didn’t ask me if I wanted to try out an experimental AI system. It presented it as “the answer.”

But wait. There’s more. A few hours later, I demo’ed this and the offending notion about what constituted a light year was gone from the answer. Coincidence? 

AI generated. I asked for a forest with rabbit ears instead of leaves. Does this fit the bill?

A few weeks later, I happened to be at a dinner and the conversation turned to Arabic. I mentioned that I had tried to learn a little in preparation for a possible assignment for IBM. I said that, in Arabic, verbs as well as nouns and adjectives are “gendered.” Someone said, “Oh, yes, it’s the same in Spanish.” No, it’s not. I checked with a query—not because I wasn’t sure—but in order to have “objective proof.” To my astonishment, when I asked which languages have gendered verbs, the answer came back saying that this was true of Romance languages and Slavic languages. It is not true of Romance languages. Then, the AI system offered an example. That’s nice. But what the “example” actually shows is the verb not changing with gender. The next day, I went to replicate this error and it was gone. Coincidence?

Last Saturday, at the “Geezer’s Breakfast,” talk turned to politics and someone asked whether Alaska or Greenland was bigger. I entered a query something like: “Which is bigger? Greenland or Alaska.” I got back an AI summary. It compared the area of Greenland and Iceland. Following the AI summary were ten links, each of which compared Greenland and Iceland. I turned the question around: “Which is larger? Alaska or Greenland?” Now, the AI summary came back with the answer: “Alaska is larger with 586,000 square miles while Greenland is 836,300 square miles.”

AI generated. I asked for a map of the southern USA with the Gulf of Mexico labeled as “The Gulf of Ignorance” (You ready for an AI surgeon?)



What?? 

When I asked the same question a few minutes later, the comparison was fixed. 

So…what the hell is going on? How is the AI system repairing its answers? Several possibilities spring to mind. 

Perhaps there is a team of people “checking on” the AI answers and repairing them. That seems unlikely to scale. Spot checking I could understand, or perhaps checking them in batch, but it’s as though each mistake triggers a change that fixes that particular issue.

Way back in the late 1950’s/early 1960’s, Arthur Lee Samuel developed a program to play checkers. The program had various versions that played against each other in order to improve play faster than could be done by playing against human opponents. This general idea has been used in AI many times since.

One possible explanation of the AI self-correction is that the AI system has a variety of different “versions” that answer questions. For simplicity of explanation, let’s say there are ten, numbered 1 through 10. Randomly, when a user asks a question, they get one version’s answer; let’s say they get an answer based on version 7. After the question is “answered” by version 7, its answer is compared to the consensus answer of all ten. If the system is lucky, most of the other nine versions will answer correctly. This provides feedback that will allow the system to improve.
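To make that hypothesis concrete, here is a minimal sketch in Python. Everything in it is my own invention for illustration (the class name, the flag_for_update step, and the ten-version setup are hypothetical, not anything a vendor has disclosed): each “version” is a callable that returns an answer, one version’s answer is served at random, and a served answer that disagrees with the majority gets flagged for corrective updating.

import random
from collections import Counter

class EnsembleQA:
    """Toy model of the hypothesized self-correction loop: several
    independently trained "versions" answer the same question, and a
    served answer that disagrees with the consensus triggers feedback."""

    def __init__(self, versions):
        self.versions = versions  # e.g., ten answer functions

    def answer(self, question):
        # The user sees one randomly chosen version's answer.
        chosen = random.choice(self.versions)
        served = chosen(question)

        # Offline, compare that answer to the consensus of all versions.
        all_answers = [v(question) for v in self.versions]
        consensus, votes = Counter(all_answers).most_common(1)[0]

        # If the served answer is an outlier, flag the chosen version.
        if served != consensus and votes > len(self.versions) // 2:
            self.flag_for_update(chosen, question, consensus)
        return served

    def flag_for_update(self, version, question, consensus):
        # Placeholder for whatever retraining or patching step would
        # realign the outlier version with the majority answer.
        print(f"Disagreement on {question!r}; consensus: {consensus!r}")

# If nine of ten versions answer correctly, the odd one out gets
# caught the first time its answer happens to be served.
versions = [lambda q: "Greenland"] * 9 + [lambda q: "Alaska"]
qa = EnsembleQA(versions)
qa.answer("Which is larger, Alaska or Greenland?")

On this account, the “fix” I observed minutes later would simply be the outlier version being pulled back toward the majority.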

There is a more paranoid explanation. At least, a few years ago, I would have considered it paranoid because I like to give people the benefit of the doubt and I vastly underestimated just how evil some of the greediest people on the planet really are. So, now, what I’m about to propose, while I still consider it paranoid, is not nearly so paranoid as it would have seemed a few years ago. 

MORE! MORE! MORE!

Not only have I discovered that the ultra-greedy are short-sighted enough to usher in a dictatorship that will destroy them and their wealth (read what Putin did, and Stalin before him), but I have noticed an incredible number of times in the last few years when a topic I am talking about is followed within minutes by ads for products and services relevant to that conversation. Coincidence?

Possibly. But it’s also possible that the likes of Alexa and Siri are constantly listening in and it is my feedback that is being used to signal that the AI system has just given the wrong answer. 

Also possible: AI systems are giving occasional wrong answers on purpose. But why? They could be intentionally propagating enough lies to make people question whether truth exists but not enough lies to make us simply stop trusting AI systems. Who would benefit from that? In the long run, absolutely no-one. But in the short term, it helps people who aim to disenfranchise everyone but the very greediest.

Next step: See whether the AI immediately self-corrects even without my indicating that it made a mistake. 


Meanwhile, it should also be noted that promulgating AI is only one prong of a two-pronged attack on natural intelligence. The other prong is the loud, persistent, threatening drumbeat of false narratives that we (Americans as well as the rest of the world) are supposed to accept as excuses for stupidity. America is again touting non-cures for serious disease and making excuses for egregious security breaches rather than admitting to error and searching for how to ensure they never happen again.

AI-generated image to the prompt: A man trips over a log which makes him spill an armload of cakes. (How exactly was he carrying this armload of cakes? How does one not notice a log this large? Perhaps having three legs makes it more confusing to step over? Are you ready for an AI surgeon now?)

————-

Turing’s Nightmares

Sample Chapter from Turing’s Nightmares: A Mind of its Own

Sample Chapter from Turing’s Nightmares: One for the Road

Sample Chapter from Turing’s Nightmares: To Be or Not to Be

Sample Chapter from Turing’s Nightmares: My Briefcase Runneth Over

How the Nightingale Learned to Sing

Essays on America: The Game

Roar, Ocean, Roar

Dance of Billions

Imagine All the People

Take a Glance; Join the Dance

Life is a Dance

The Tree of Life

Family Matters, Part 3: The Whole is Greater than the Sum of its Parts

27 Saturday May 2017

Posted by petersironwood in America, family, health, The Singularity, Uncategorized

≈ 5 Comments

Tags

AI, Artificial Intelligence, cognitive computing, decision making, family


Some of my earliest and fondest memories centered around family dinners at my grandpa and grandma’s house. For Thanksgiving, for example, there was turkey, mashed potatoes, gravy, sweet potatoes, green beans, olives, rolls, salad and several pies for dessert. But beyond the vast array of food, it was fun to see my grandparents, parents, three aunts and three uncles, and various numbers of cousins. On a few occasions, my second cousin George appeared, and, early on, my Aunt Mary and Aunt Emma. All of these people were so different! We had more fun because we were all there together.

You have heard “The Whole is Greater than the Sum of its Parts” before, no doubt, but I think this is what it means when applied to a family setting. All families argue (although ours never did in these larger Holiday settings). And, almost all families love. But a fundamental question is this: do the people in the family tend to “thrive” more than they would on their own? If the family is functional, this should be the case. They balance each other; they support each other; they help each other improve. They cooperate when it counts. You will not always agree on everything. Far from it. You might be a slob like Oscar while your sibling might be very Felix-like. And, you’re both “right” under different circumstances and for different tastes.

Many sports teams will have a variety of people who excel more in running, or in blocking, or in throwing, or scoring. In baseball, for instance, or American Football, there are very different people in different roles, both physically and temperamentally. An offensive lineman in football will typically be stronger and bigger than a quarterback. Moreover, if the lineman gets “angry”, they might be able to block better on the next play. By contrast, the quarterback must remain calm, cool, and confident under pressure. He must try to put away any fear or anger or depression he feels on the way to the huddle before he gets there and certainly before the snap. When teams are working well together, they don’t criticize each other for differences and they work together to win the game rather than wasting time pointing fingers or trying to assign blame. In a baseball or football team, there is no question that the individual does better because of his teammates. Working together they can solve problems, win trophies, and have more fun than they could individually.


Your right eye sees the world a little differently from your left eye. Thank goodness! Your brain normally combines these two somewhat different flat, 2-D pictures into a 3-D picture! Your brain does not argue as to which one of these views is “correct.” It certainly does not instigate religious wars over it. I say that the brain “normally” does this. However, if a person is born with eyes that do not move or align smoothly, or if one eye is extremely near-sighted, it can happen that the brain “chooses” one eye to pay attention to. In this case, it seems the two images are so discrepant that the brain “gives up” trying to integrate them and instead chooses one image to use. In a condition such as “amblyopia” the brain mainly relies on the input from one eye. This condition is a distinct disadvantage in many sports.

In boxing, for example, it is literally a show-stopper. A fighter might look like hamburger, but the fight goes on. If, however, there is a cut above his or her eye so that blood drips down to obscure vision in one eye, the fight is stopped. That fighter can no longer see in depth (as well as losing some peripheral vision). It is no longer deemed a “fair” fight. Anyway, it seems the human brain does have some limits as to how much two discrepant views can be reconciled, at least when it comes to vision. Is there a limit to how much a family may disagree productively and still be functional? This is a good question, but one to return to later. Instead, let’s first turn to what are called “dysfunctional families.”

We said that in a functional family or team, people are better off than they would be doing something on their own. On the other hand, consider a dysfunctional family. Here, people get mostly grief, judgement, criticism, competition, and lies. Why does this happen? Often dysfunctional behaviors are handed down from generation to generation through social learning, among other things. If too many dysfunctional behaviors are in one family, this causes a “vicious circle” that makes things worse and worse. For example, imagine a family is basically healthy but they do not engage in “alternatives thinking.” They see a situation, come up with an idea, and unless there is imminent danger, execute the idea as soon as possible. They will end up in a lot of trouble with that strategy. However, if they don’t engage in blame-finding, but instead engage in collective improvement, they will learn over time to make fewer and fewer mistakes. People will all benefit from being in the family. But if a family instead fails to consider multiple alternatives before committing to a course of action and has a cycle of blaming each other without ever improving, then it will probably be dysfunctional. People will give more and get less in return than if they had been working alone. That does not mean there are zero benefits within a dysfunctional family. They may still cover for each other, help each other, provide emotional support, etc. But the costs outweigh the benefits in the long run.

People who come from functional families tend to see the world in a very different way as compared with people who come from dysfunctional families. Obviously, there are all sorts of exceptions as well as other factors at play, but other things being equal, these families of origin color our perceptions of daily life and predispose us to certain actions. Depending on the circumstances, it is even true that some of what we think of as “dysfunction” could actually be “function” instead. Suppose, for instance, you and two siblings suddenly found yourselves attacked by a bear. It may be the best thing imaginable to take the first action you think of without trying to over-analyze the situation. Or not. It may well depend on the bear. And, therein lies the rub.


Our own personal experiences are always a teeny sliver of all possible situations. So, your experience with a bear, bee, or bank may be quite different from mine. As a consequence, we may have different ideas about what constitutes function or dysfunction. In terms of the argument I am about to make, it doesn’t really matter which is “better” or “worse.” All that matters is that we agree some families provide a healthier environment than others. And attitudes are not all that are handed down; so are “ways to do things.”

Perhaps the arbitrary nature of what we consider “intelligent” wisdom handed down in families is best illustrated by a story about making a Holiday Ham. In the kitchen, a 10-year-old boy asks: “How come you’re slicing off the ends of the ham?”

His mom answers, “Oh, that’s the way your grandpa always did it.”

Son: “So, why did he do it?”

Mom: “Oh, well. Uh. I don’t really know. Let’s go ask him.”

Son: “Hey, Grandpa, how come you cut the ends of the ham off?”

Grandpa: “Well, sonny. It’s because….it’s because…let’s see. That’s the way my mom always did it.”

As it turns out, the 90-year-old great-grandma was at the feast as well. Though she was a bit hard of hearing, they eventually got her to understand the question and thus she answered, “Oh, I always used to cut off the ends because I only had one small pan and it wouldn’t fit. No reason for you all to do it now.”

And there you have it in a nutshell. We are all walking around with thousands if not millions of little bits of “folk wisdom” we learned through our family interactions. In most cases, we’re not even aware of them. In virtually no case did we ask about where this folk wisdom came from. Have any of us actually tested one of these out in our own life to see whether it still holds up? And then what? Are you going to inform the others in the family that what everyone believes may not actually be true, at least in every case? Maybe. Most do not, in my experience. In addition, it seems that if you are from a “functional” family, you are much more likely to share this kind of experience (though even then not 100% of the time). People will often be interested in it and want to learn more. If you are from a more dysfunctional family, you might be more likely to expect that they would put you down and try to shoot holes in your example. They might laugh at you. They might just not talk to you. So, what do you do?


We can extend these ideas to much broader notions such as a clan, a team, a business, a nation. For people who were not lucky enough to grow up in a functional family, the notions of trust and cooperation come hard. And, that’s a sad thing. Because your sense of what a bee or a bear or a bank will do tends to be based on your own experience, with very little reliance on the experiences of others. You are one person. There are 7 billion on the planet. So, yes, you can rely on your own experience and dismiss everyone else’s. Good luck.

Even a functional family may draw the boundaries around itself so tightly and firmly that anyone “inside” the circle of trust is trusted but anyone outside is fair game to take unfair advantage of. At the same time, such a family regards anyone outside as a threat who must “obviously” be out to get their family. People from this type of family do know cooperation and trust, but find it nearly impossible to extend the concept across boundaries of family, culture, or nation.  They are happy to hear about their brother’s experiences with bees but they are not much interested in the experiences of their cousins from half way around the world.

Everyone must decide for themselves how much to rely on their own experiences and how much to rely on close relatives, authority figures, ancient teachings, or the vast collective experience of humanity. Of course, it doesn’t have to be an either/or thing. You might “weight” different experiences differently. And, that weighting may reasonably be quite different for different types of situations and strangers. For instance, if your cousin is a smooth talker, vastly handsome, and twenty years younger, you might not put much stock in his or her advice about how to “hook up.” You might instead put more credence in someone at work who is in a similar situation. You might put very little stock in the experiences from a culture that relies on arranged marriages. Surprisingly, exactly because they are from a very different situation and therefore have a quite different take on matters, they may give you very new and creative ways to approach your situation. For example, you might find that if you “pretend” you are already “pledged” to a partner your parents chose, dating might be less anxiety provoking and more fun. You might actually be more successful. I’m not saying this specific strategy would work or that ideas from other cultures are always better than ones from your own culture. I am just saying that they need not be dismissed out of hand, not because it’s “politically correct” but because it is in your own selfish interest.

I’ve already mentioned in previous blogs that people are highly related and inter-connected via genetics, their environmental interchanges, their informational interchanges and through the emotional tone of their interactions. Because people are highly interconnected, you can find much wisdom in the experiences of others. But there is another, largely underused aspect of this vast inter-relatedness. I call it familial gradient cognition. Or, if you like, “Mom’s somewhat like me.”

To understand this concept and why it is important, let’s first take a medical example. However, this potential type of thinking is not limited to medical problems. It basically applies to everything. So, you have a pain in your right hip. What is the cause and how do you fix it? That’s your question for the doctor, or more likely, nurse practitioner. They will typically ask questions about your activity, diet, what you’ve done lately, when the pain comes and goes etc. They may run various tests and decide you have sciatica. This in turn leads to a number of possible treatments. When I had sciatica, I got referred to a sports medicine doctor and got acupuncture. It worked. (Later, I discovered an even better treatment — the books of John Sarno). Anyway, we would call this a success and it seems like a reasonable process. But is it?

The medical professional’s knowledge is based on watching other experts, book learning, their own experience, etc. And so they basically engage in this multiplication of experience. The modern doctor’s observations are based on literally many millions of cases; far more than he or she could possibly observe first hand. But what potentially useful information was completely omitted from the process described above? Hint: blogpost title.

Yes, exactly. Throughout this whole process, no-one asked me whether anyone in my family (e.g., my mom, dad, or brother) had had these symptoms. No one asked whether they had had any kind of treatment, and if so, what had worked and not worked for them. Now, my brother, mom and dad are especially closely related but so are my four children and my grandparents, aunts, uncles, nieces, nephews and grandchildren. And, in the most usual cases, it isn’t merely that we share even slightly more genes than all of humanity. We are also likely to share diet, routines, climate, history and family stories and values. These too can play a part in promoting health. For example, did people in your family believe in “toughing it out” or were they more hypochondriacal? The chances are, you will tend to have similar attitudes.

In medicine, would it be better to make decisions based, not just on the data of the one individual under treatment, but on the entire tree with more weight given to the data for other individuals based on how closely related they were? Of course, family relations are only one way in which the data of some individuals will be more likely relevant to your case than will others. For instance, people in the same age cohort, people who live in the same area, people who are in similar professions or who work out the same number of hours a week that you do will be, other things being equal, of more relevance than their opposites.

Of course, as I’ve already mentioned, modern medicine does take into account the life experiences of many other people. But these other “people” are completely unknown. Studies are collectively based on a hodgepodge of people. Some studies use random sampling, but that is still going to be a random sample limited by geography, age, condition, etc. Other studies will use “stratified sampling” that will report on various groups differently. Some studies are meta-studies of other studies and so on. But how similar or dissimilar these people were to each other on a thousand or a million potentially relevant factors is more than 99% lost in the reporting of the data. But that doesn’t really matter, because the doctor would typically not look at any article in response to your case; he or she will base their judgement on just you and the information they know “in general,” which is based on a total mishmash of people.

Imagine instead that every person’s medical issues were known as well as how everyone was related to everyone else, not only genetically but historically, environmentally, etc. And now imagine that in making diagnosis decisions as well as treatment options, the various trees of people who were “related” to you in these thousands of ways were weighted by how close they were on all these factors. Over time, the factors themselves could become weighted differently under different circumstances and symptoms, but for now, let’s just imagine they are treated equally. It seems clear that this would result in better decision making. Of course, one reason no-one does this today is that keeping track of all that data is mind-boggling. Even if you had access to all the relevant data, we can’t lay out and overlay all these relationships mentally to make a decision (at least not consciously).


However, a powerful computer program could do this. And, the result would almost certainly be better decisions. There are obvious and serious ethical concerns about such a system. In addition, the temptation for misuse might be overwhelming. Such a system, if it did exist, would have to be cleverly designed to prevent any one power from “taking it over” for its own ends. There would also have to be a way to use all these similarities while preventing the revelation of the identities of the individuals. All of that, however, is grist for another mill. Let’s return to the basic idea of making decisions by using multiple matrices of similarity to the case at hand rather than relying on general rules based on what has been found to be true “of people.”
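As a rough illustration of what such a program might do, here is a small sketch in Python. Everything here is invented for illustration: the three relatedness factors, the toy numbers, and the exponential weighting are all assumptions, and the “treated equally” weighting above corresponds to the uniform factor weights. Each prior case votes for its own outcome, with a vote that shrinks the less “related” that person is to the current patient.

import numpy as np

def weighted_diagnosis(patient, cases, outcomes, factor_weights):
    # How far the patient is from each prior case on every factor.
    diffs = np.abs(cases - patient)                  # (n_cases, n_factors)
    # One similarity score per prior case; closer cases count more.
    similarity = np.exp(-(diffs * factor_weights).sum(axis=1))
    # Each prior case votes for its own outcome, weighted by similarity.
    scores = {}
    for sim, label in zip(similarity, outcomes):
        scores[label] = scores.get(label, 0.0) + sim
    return max(scores, key=scores.get)

# Invented toy data: three relatedness factors (shared genes, shared
# diet, same climate), each scaled 0 to 1.
cases = np.array([[0.9, 0.8, 1.0],    # brother
                  [0.5, 0.7, 1.0],    # cousin
                  [0.1, 0.2, 0.0]])   # unrelated study participant
outcomes = ["sciatica", "sciatica", "arthritis"]
patient = np.array([0.8, 0.9, 1.0])
print(weighted_diagnosis(patient, cases, outcomes, np.ones(3)))

The brother’s and cousin’s cases dominate the vote here, which is exactly the “familial gradient” intuition; a real system would have thousands of factors and learned, rather than fixed, weights.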

This may be essentially what the human brain already does. A small town doctor in the last century would see people on multiple occasions; see entire families; and would undoubtedly perceive patterns of similarity that were based on those specific circumstances. The Smith family would all come in with allergies when the cottonwood trees bloomed. And so on. But he or she only sees a limited number of cases even in an entire lifetime. Suppose instead, she or he could “see” millions of cases as well as their relationships to each other? Such a doctor might well be able to perform as well as the computer and much better than they would today.

Can it be done better by collecting huge families of data and having a computer do the decision making? Or can it be done better by giving human experts access to much larger databases of inter-related case studies? What are the potential societal and ethical implications and needed safeguards for each approach?

The medical domain is only one of thousands of domains that could do better decision making this way. For example, one could use a similar approach in diagnosing problems with automobiles or tires, in diagnosing students’ difficulties learning trigonometry functions, or in determining which fertilizers and watering schedules work best for which crops in which soils. You might call this “whole body” decision making. It is a term also reminiscent of the phrase, “Put your whole body into it” (as when cracking a home run into the upper deck!).

It is also reminiscent of the following situation. When you accidentally burn your finger, it does not just affect your finger. You jump back with your whole body. There are longer-lasting effects in your brain, your stress hormones, your blood pressure. And, various organs and cell types will be involved in healing the burn on your finger. Your body works as a whole. But it is not an undifferentiated whole. Your earlobe may not be much involved with healing your finger. It is tuned to have communication paths and supply chains where they are needed. It’s had four billion years to work this out.

Of course, the way the body interacts is largely, though not wholly, determined by architecture. Even if your body decided that your earlobe should be involved, there is no way for the body to do that. To some extent, it can modify the interactions but only within very predefined limits. On the other hand, the brain is much more flexible when it comes to relating one thing to another. We can learn virtually any association. But, at least consciously, we are limited in the number of things and experiences we knowingly take into account while making a decision.

What people might say would lead you to believe that they very often base decisions on only one similar case. “Sciatica you say? Oh, yeah. My cousin Billy had that. Had an operation to remove a disk and the pain totally vanished. Of course, three months later it was back. In his …well… back in his back.” It could be the case that there is more sophisticated pattern matching going on than meets the eye. Sadly though, most laboratory experiments reveal that most of the time, under controlled conditions, people seem to suffer from a number of reasoning flaws. I believe that the current crop of difficulties people have with reasoning is not inevitable. I think it’s because of cultural stories, and with new cultural stories we could do a better job of thinking. And, we might be able to further multiply our thinking ability by giving the right kind of high speed access to thousands or millions of similar cases along with presentations based on how various cases are related. Or, we could have the computer do it.

Indeed, speaking of “family stories” that are common in our culture, I actually think that we have a “hierarchy” of thinking based on a patriarchal family structure. We do experiments and report on a teeny and largely preset sliver of the reality that was the experiment. A person reads about this and remembers a teeny sliver of what was in the paper. When it comes to a specific case, the person may or may not consciously remember that sliver. This is the “rule based” approach and it is probably better than nothing. A more holistic experience-based approach is to allow the current case to “resonate” with a vast amount of experience.  Of course, both methods can be deployed as well and perhaps there can even be a meaningful dialogue between them. But it may be worth considering taking a more “whole body” approach to complex decision making.


(The story above and many cousins like it are compiled now in a book available on Amazon: Tales from an American Childhood: Recollection and Revelation. I recount early experiences and then relate them to contemporary issues and challenges in society.)

https://www.amazon.com/author/truthtable

twitter: JCharlesThomas@truthtableJCT

Is Smarter the Answer?

31 Monday Oct 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, learning organization


Lately, I have been seeing a fair number of questions on Quora (www.quora.com) that basically ask whether we humans wouldn’t be “better off” if AI systems do “take over the world.” After all, it is argued, an AI system could be smarter than humans. It is an interesting premise and one worthy of consideration. After all, it is clear that human beings have polluted our planet, have been involved in many wars, have often made a mess of things, and right now, we are a mere hair’s breadth away from electing a US President who could start an atomic war for no more profound reason than that someone disagreed with him or questioned the size of his hands.

Personally, I don’t think that having AI systems “replace” human beings or “rule them” would be a good thing. There are three main reasons for this. First, I don’t think that the reason human beings are in a mess is that they are not intelligent enough. Second, if AI systems did “replace” human beings, even if such systems were not only more intelligent but also avoided the real reasons for the mess we’re in (greed and hubris, by my lights), they could easily have other flaws of equal magnitude. The third reason is simply that human life is an end in itself, and not a means to an end. Let us examine these in turn.

First, there are many species of plants and animals on earth that are, by any reasonable definition, much less intelligent than humans and yet have not over-polluted the planet nor put us on the brink of atomic war. There are at least a few other species such as the dolphins that are about as intelligent as we are but who have not had anything like the world-wide negative ecological impact that we have. No, although we often run into individual people who act against our (and their own) interest, and it seems as though we (and they) would be better off if they were more intelligent, I don’t think lack of intelligence (or even education) is the root of the problem with people.

Here are some simple, everyday examples. I went to the grocery store yesterday. When I checked out, someone else packed my groceries. Badly. Indeed, almost every time I go to the store, they pack the groceries badly (if I can’t pack them myself). What do I mean by badly? One full bag had ripe tomatoes at the bottom. Another paper bag was filled with cans of cat food. It was too heavy for the handles. Another bag was packed lightly, but too full so that the handles would break if you held the bag naturally. It might be tempting to think that this bagger was not very intelligent. I believe that the causes of bad packing are different. First, packers typically (but not universally) pay very little attention to what they are actually doing. They seem to be clearly thinking about something other than what they are doing. Indeed, this describes a lot of human activity, at least in the modern USA. Second, packers are in a badly designed system. Once my cart is loaded up, another customer is already having their food scanned on the conveyer belt and the packer is already busy. There is no time to give feedback to the packer on the job they have done. Nor is the situation really very socially appropriate. No matter how gently done, a critique of their performance in front of their colleagues and possibly their manager will be interpreted as an evaluation rather than an opportunity for learning. Even if I did give them feedback, they may or may not believe it. It would be better if the packer could follow me home and observe for themselves what a mess they have made of the packing job. I think if they did that a few times, they’d be plenty smart enough to figure out how to pack better.

Unfortunately, packing is not the only example of this type of system. Another common example is that programmers develop software. These people are typically quite intelligent. But they often build their software and never get a chance to see their software in action. Many organizations do not carry out user studies “in the wild” to see how products and services are actually used. It isn’t that the software builders are not smart. But it is problematic that they do not get any real feedback on their decisions. Again, as in the case of the packers, the programmers exist in an organizational structure that makes honest feedback about their errors far too often seem like an evaluation of them, rather than an occasion for learning.

A third example is hotel personnel. A hotel is basically a service business. The cost of the room is a small part of the price. A hotel exists because it serves the customers. Despite this, people behind the desks seldom have incentives and mechanisms to hear, understand and fix problems that their customers encounter. A quintessential example came in Boston when my wife and I were there for a planning meeting for a conference she would be chairing in a few months. When we checked out, the clerk asked whether everything was all right. We replied that the room was too hot but we couldn’t seem to get the air conditioning to work. The clerk said, “Oh, yes! Everyone has that problem. You need to turn on the heater for the A/C to work.” This was a bad temperature control design for starters, but the clerk’s response clearly indicated that they were aware of the problem but had no power (and/or incentive) to fix it.

These are not isolated examples. I am sure that you, the reader, have a dozen more. People are smart enough to see and solve the problems, but that is not their job. Furthermore, they will basically get “shot down” or at best ignored if they try to fix the problem. So, I really don’t think the issue is that people are not “smart enough” to fix many of the problems we have individually.  It is that we design systems that make us collectively not very smart. (Of course, in outrageous cases, even some individual humans are so prideful that they cannot learn from honest feedback from others).

Now, you could say that such systems are themselves a proof that we are not smart enough. However, that is not a very good explanation. There are existence proofs of smarter organizations. The sad part is that they are exceptions rather than rules. In my experience, what keeps people from adopting better organizations (e.g., ones where people are empowered to understand and fix problems) is hubris and greed, not a lack of intelligence.

Firstly, in many situations, people believe that they already know everything they need in order to do their job. They certainly don’t want public feedback indicating that they are making mistakes (i.e., could improve) and this attitude spreads to their processing of private feedback. You can easily imagine a computer programmer saying, “I’ve been writing code for User Interfaces for thirty years! Now, you’re telling me I don’t know how?” Why can we imagine that so easily? Because the organizations that most of us live in are not organizations where learning to improve is stressed.

In many organizations, the rules, processes, and management structure make very little sense if the main goal is to make the organization as effective as possible. Instead, however, they make perfect sense if the main goal of the organization is to ensure that the people who have the most power and make the most money keep having the most power and making the most money. In order to do that on an ongoing basis, it is true that the organization must be minimally competent. If they are a grocery store, they must sell groceries at some profit. If they are a software company, they need to produce some software. If they are a hotel, they can’t simply poison all their potential guests. But to stay in business, none of these organizations must do a stellar and ever-improving job.

So, from my perspective, the reason that most organizations are not better learning organizations is not that we humans are not intelligent enough. The reason for marginally effective organizations is that the actual goal is mainly to keep people at the top in power. Greed is the biggest problem with people, not lack of intelligence. History shows us that such greed is ultimately self-defeating. Power corrupts all right, and eventually power erodes itself or explodes itself in revolution. But greedy people continue to believe that they can outsmart history. Dictators believe that they will not suffer the same fate as Hitler or Mussolini. CEO’s believe their bad deeds will go unpunished (indeed, often that’s true). So-called leaders often reject criticism by others and eventually spin out of control. That’s hubris.

I see no reason whatever to believe that AI systems, however intelligent, would be anything more than reflections of that greed and hubris. It is theoretically possible to design AI systems without hubris and greed, but it is also quite possible to develop human beings in whom hubris and greed are not the predominant motivations. We all know people who are eager to learn throughout life; who listen to others; who work collaboratively to solve problems; who give generously of their time, money, and possessions. In fact, humans are generally very social animals, and it is quite natural for us to worry more about our group, our tribe, our country, and our family than about our own little egos. How much hubris and greed end up in an AI system will depend very much on the nature and culture of the organization that builds it.

Next, let us consider what other flaws AI systems could have.

Author Page on Amazon

The Pros and Cons of AI: Part One

24 Saturday Sep 2016

Posted by petersironwood in health, The Singularity, Uncategorized

≈ 11 Comments

Tags

AI, Artificial Intelligence, cognitive computing, ethics, health care, the singularity, user experience, utopia

IMG_5478

This is the first of three connected blog posts on the appropriate uses and misuses of AI. In this blog post, I’ll look at “Artificial Ingestion.” (Trust me, it will tie back to another AI, Artificial Intelligence).

While ingestion, and therefore “Artificial Ingestion,” is a complex topic, I begin with ingestion because it is a bit more divorced from thought itself. It is easier to think of ingestion as separate from thinking, that is, to objectify it, than it is with intelligence, because in writing about intelligence we necessarily use intelligence itself.

Do we eat to live or live to eat? There is little doubt that eating is necessary to the life of animals such as human beings. Our distant ancestors could have taken a greener, more photosynthetic path, but instead we have collectively decided to kill other organisms to garner our energy. Eating has a utilitarian purpose; indeed, it is a vital purpose. Without food, we eventually die. Moreover, the quality and quantity of the food we eat has a profound impact on our health and well-being. Many of us live in a paradoxical time when it comes to food. Our ancestors often struggled mightily to obtain enough food, so our brains are genetically “wired” to seek out high-sugar, high-fat, high-salt foods. Even though many of us “know” that we ingest too many calories, and may have read and believe that too much salt and sugar are bad for us, it is difficult to overcome the “programming” of countless generations. We are also attracted to brightly colored food; in our past, these colors often signaled foods especially high in healthful phytochemicals.

Of course, in modern societies of the “Global North,” our genetic predispositions toward high-sugar, high-fat, high-salt, highly colored foods are manipulated by greedy corporate interests. Foods like crackers and chips that contain almost nothing of real value to the human diet are packaged to look like real foods. Beyond that, billions of advertising dollars are spent to convince us that buying and ingesting these foods will help us achieve other goals. For example, we are led to believe that a mother who gives her children “food” consisting of little other than sugar and food dye will be loved by her children, and that they will be excited and happy children. Children themselves are led to believe that ingesting such junk food will lead them to magical kingdoms. Adult males are led to believe that providing the right kinds of high-fat, high-salt chips will result in male bonding experiences. Adult males are also led to believe that the proper kinds of alcoholic beverages will result in the seduction of highly desirable-looking mates.

Over time, the natural act of eating has been enhanced with rituals. Human societies came to hunt and gather (and later farm) cooperatively. In this way, much more food could be provided on a more continuous basis. Rather than fight each other over food, we sit down in a “civilized” manner and enjoy food together. Some people, through a combination of natural talent and training, become experts in the preparation of food. We have developed instruments such as chopsticks, spoons, knives, and forks to help us eat. Most cultures have rituals and customs surrounding food. In many cases, these seem geared toward moving us psychologically away from the life-giving functionality of food and toward the communal enjoyment of it. For example, in my culture, we wait to eat until everyone is served. We eat at a “reasonable” pace rather than gobbling everything down as quickly as possible (before others at the table can snatch our portion). If there are ten people at the table and eleven delicious desserts, people turn many social somersaults in order to avoid taking the last one.

For much of our history, food was confined to what was available in the local region and season. Now many people, though by no means all, are well off enough to buy, in any season, foods that were originally grown all over the world. When I was a child, very few Americans had even tried sushi, for example, and the very idea of eating raw fish turned stomachs. At this point, however, many Americans have tried it, and most who have tried it enjoy it. Similarly, other cuisines, such as Indian and Middle Eastern, have spread throughout the world in ways that would have been impossible without modern transportation, refrigeration, and modern training, with cookbooks, translations, and videos supplementing face-to-face apprenticeships.

Some of these trends have enabled some people to enjoy foods of high quality and variety. We support many more people on the planet than would have been possible through hunting and gathering. These “advances” are not without costs, however. First, there are more people starving in today’s world than even existed on the planet 250,000 years ago, so the benefits are very unevenly distributed. Second, while fine and delicious foods are available to many, the typical diet of many others is based primarily on highly processed grains, soybeans, fat, refined sugar, salt, and additives. These “foods” contain calories that allow life to continue; however, they lack many naturally occurring substances that help provide for optimal health. As mentioned, these foods are made “palatable” in the cheapest possible way and then advertised relentlessly to help fool people into thinking they are eating well. In many cases, even “fresh” foods are genetically modified, through breeding or via genetic engineering, to be optimized for cheap production and distribution rather than taste. Anyone who has grown their own tomatoes, for example, can readily appreciate that home-grown “heirloom” tomatoes are far tastier than what is available in many supermarkets. While home gardeners and small farmers have little in the way of government support, at least in the USA, mega-farming corporations are given huge subsidies to produce vast quantities of poor-quality calories. As a consequence, low-income people generally cannot afford good-quality fresh fruits and vegetables and instead are steered, through artificially cheap prices, toward feeding their families brightly packaged but essentially empty calories.

While some people enjoy some of the best food that ever existed, others have very mediocre food and still others have little food of any kind. What comes next? On the one hand, there is a move toward ever more efficient means of production and distribution of food. The food of humans has always been of interest to a large variety of other animals including rats, mice, deer, rabbits, birds, and insects. Insect pests are particularly difficult to deal with. In response, and in order to keep more of the food for “ourselves”, we have largely decided it is worth the tradeoff to poison our food supply. We use poisons that are designed to kill off insect pests but not kill us off, at least not immediately. I grow a little of my own food and some of that food gets eaten by insects, rabbits, and birds. Personally, I cannot see putting poison on my food supply in order to keep pests from having a share. However, I am lucky. I do not require 100% of my crop in order to stay alive nor to pay off the bank loan by selling it all. Because I grow a wide variety of foods in a relatively small space, there is a lively ecosystem and I don’t typically get everything destroyed by pests. Farmers who grow huge fields of corn, however, can be in a completely different situation and a lot of a crop can fall prey to pests. If they have used pesticides in the past, this is particularly true because they have probably poisoned the natural predators of those pests. At the same time, the pests themselves continue to evolve to be resistant to the poisons. In this way, chemical companies perpetuate a vicious circle in which more and more poison is needed to keep the crops viable. Luckily for the chemical companies, the long-term impact of these poisons on the humans who consume them is difficult to prove in courts of law.

There are movements such as “slow food” and eating locally grown food and urban gardens which are counter-trends, but by and large, our society of specialization has moved to more “efficient” production and distribution of food. More people eat out a higher percentage of the time and much of that “eating out” is at “fast food” restaurants. People grab a sandwich or a bagel or a burger and fries for a “quick fix” for their hunger in order to “save time” for “more productive” pursuits. Some of these “more productive” pursuits include being a doctor to cure diseases that come about in part from people eating junky food and spending most of their waking hours commuting, working at a desk or watching TV. Other “more productive” pursuits include being a lawyer and suing doctors and chemical companies for diseases. Yet other “more productive pursuits” include making money by pushing around little pieces of other people’s money. Still other “more productive pursuits” include making and distributing drugs to help people cope with lives where they spend all their time in “more productive pursuits.”

Do we live to eat or eat to live? Well, it is a little of both. But we seem to have painted ourselves into a corner where most people most of the time have forgone the pleasure of eating that is possible in order to eat more “efficiently” so that we can spend more time making more money. We do this in order to…? What is the end game here?

One can imagine a society in which eating itself becomes a completely irrelevant activity for the vast majority of people. Food that requires chewing takes more time, so let’s replace chewing with artificial chewing. Using a blender allows food with texture to be quickly turned into a liquid that can be ingested in the minimum necessary time. One extreme science-fiction scenario was depicted in the movie “Soylent Green,” in which the eponymous food, as it turns out, is made from the bodies of people killed to make room for more people. The movie is set in 2022 (not that far away) and was released in 1973. Today, in 2016, there exists a food called “soylent” (https://en.wikipedia.org/wiki/Soylent_(food)) whose inventor, Rob Rhinehart, took the name from the movie. It is not made from human remains, but the purpose is to provide an “efficient” solution to the Omnivore’s Dilemma (Michael Pollan). More efficient still than smoothies, shakes, and soylent are feeding tubes.

Of course, there are medical conditions where feeding tubes are necessary as a replacement for or supplement to ordinary eating, as is being “fed” via an IV. But is this really where humanity in general needs to be headed? Is eating to be replaced with “Artificial Ingestion” because it is more efficient? We wouldn’t have to “waste our time” and “waste our energy” shopping, choosing, preparing, chewing, etc., if we could simply have all our nutritional needs met via an IV or feeding tube. With enough people opting in, I am sure industrial research could provide ever less invasive and more mobile forms of IV and tube feeding. At last, humanity could be freed from the onerous task of ingestion, all of it replaced by “Artificial Ingestion.” The dollars saved could be put toward some more worthy purpose; for example, making a very few people very, very rich.

There are, of course, a few problematic issues. For one thing, despite years of research, we are still discovering nutrients and their impacts. Any attempt to completely replace food with a uniform liquid supplement would almost certainly leave out some vital, but as yet undiscovered ingredients. But a more fundamental question is to what end would we undertake this endeavor in the first place? What if the purpose of life is not, after all, to accomplish everything “more efficiently” but rather, what if the purpose of life is to live it and enjoy it? What then?

Author’s Page on Amazon

Turing’s Nightmares

Rules and Standards nearly Dead? 

04 Sunday Sep 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, law, speeding, the singularity, Turing

funnysign

Ever get a speeding ticket that you thought was “silly”? I certainly have. On one occasion, when I was in graduate school in Ann Arbor, I drove by a police car parked in a gas station. It was a 35 mph zone. I looked over at the police car and looked down to check my speed. Thirty-five mph. No problem. Or so I thought. I drove on and noticed that a few seconds later the police officer turned his car onto the same road and began following me, perhaps 1/4 to 1/2 mile behind. He quickly zoomed up and turned on his flashing light to pull me over. He claimed he had followed me and that I was going 50 mph. I was going 35. I kept checking, because I saw the police car in my mirror. Now, it is quite possible that the police car was traveling 50, because he caught up with me very quickly. I explained this, to no avail.

The University of Michigan at that time in the late 60’s was pretty liberal but was situated in a fairly conservative, some might say “redneck”, area of Michigan. There were many clashes between students and police. I am pretty certain that the only reason I got a ticket was that I was young and sporting a beard and therefore “must be” a liberal anti-war protester. I got the ticket because of bias.

Many years later, in 1988, I was driving north from New York to Boston on Interstate 84. This particular section of road is three lanes on both sides. It was a nice clear day and the pavement was dry as well as being dead straight with no hills. The shoulders and margins near the shoulders were clear. The speed limit was 55 mph but I was going 70. Given the state of my car, the conditions and the extremely sparse traffic, as well as my own mental and physical state, I felt perfectly safe driving 70. I got a ticket. In this case, I really was breaking the law. Technically. But I still felt it was a bit unjustified. There was no way that even a deer or rabbit, let alone a runaway child could come out of hiding and get to the highway without my seeing them in time to slow down, stop, or avoid them. Years earlier I had been on a similar stretch of road in Eastern Montana and at that time there was no speed limit. Still, rules are rules. At least for now.

“The Death of Rules and Standards” by Anthony J. Casey and Anthony Niblett suggests that advances in artificial intelligence may someday soon replace rules and standards with “micro-directives” tuned to the specifics of time and circumstance, providing the benefits of both rules and standards without the costs of either. “…we suggest…a larger trend toward context specific laws that can adapt to any situation.” This is an interesting thesis, and exploring it helps shine some light on what AI likely can and cannot do, as well as making us question why we humans have categories and rules at all. Perhaps AI systems could replace human bias and general laws that impose seemingly unnecessary restrictions in particular circumstances.

The first quibble with their argument is that no computer, however powerful, could possibly cover all situations. Taken literally, this would require a complete and accurate theory of physics and of human behavior, as well as knowledge of the position and state of every particle in the universe. Not even post-singularity AI will likely be able to accomplish this. I hedge with the word “likely” because it is theoretically possible that a sufficiently smart AI will uncover some “hidden pattern” showing that our universe, which seems so vast and random, can in fact be predicted in detail by a small set of laws that do not depend on details. In this fantasy future, there is no “true” randomness or chaos or butterfly effect.

Fantasies aside, the first issue that must be dealt with for micro-directives to be reasonable is having a good set of “equivalence classes” and/or partitioning away differences that make no difference. The position of the moons of Jupiter shouldn’t make any difference as to whether a speeding ticket should be given or whether a killing is justified. Spatial proximity alone allows us as humans to greatly diminish the number of factors that need to be considered in deciding whether a given action is required, permissible, poor, or illegal. If I had gone to court about the speeding ticket on I-84, I might have mentioned the conditions of the roadway and its surroundings immediately ahead. I would not have mentioned anything whatever about the weather or road conditions anywhere else on the planet as being relevant to the safety of the situation. (Notice, though, that it did seem reasonable to me, and possibly to you, to mention that very similar conditions many years earlier in Montana gave rise to no speed limit at all.) This gives us a hint that what is or is not relevant to a given situation is non-trivially determined. In fact, the “energy crisis” of the early 70’s gave rise to the National Maximum Speed Law as part of the 1974 Federal Emergency Highway Energy Conservation Act, which capped speed limits nationally at 55 mph. A New York Times article by Robert A. Hamilton cites a study of compliance on Connecticut Interstates in 1988 showing that 85% of drivers violated the 55 mph speed limit!

So not only would I not have received a ticket in Montana in 1972 for driving under similar conditions; I also would not have gotten a ticket on that same exact stretch of highway for going 70 in 1972 or in 1996. And in the year I actually got the ticket, 85% of drivers were also breaking the speed limit. The impetus for the 1974 law was to reduce demand for oil; advocates, however, were quick to point out that it should also improve safety. Despite several studies of both factors, it is still unclear how much oil, if any, was actually saved, and it is also unclear what the impact on safety was. It seems logical that slower speeds should save lives. However, people may go out of their way to get to an Interstate if they can drive much faster on it, so some traffic during the 55 limit would stay on less safe rural roads. In addition, falling asleep while driving is not recommended: driving a long trip at 70 gets you off the road earlier, perhaps before dusk, while driving at 55 keeps you on the road longer and possibly in the dark. Lowering the speed limit, to the extent there is any compliance, does not just affect driving; it can also affect productivity, since time spent on the road is (hopefully) not time spent working for most people. One reason it is difficult to measure empirically the impact of slower speeds on safety is that other things were happening as well: cars gained a number of safety features over time, seat-belt usage went up, and cars became more fuel efficient. Computers, even very “smart” computers, are not “magic.” They cannot completely disentangle cause and effect from naturally occurring data. For that, humans or computers have to do expensive and ethically problematic field experiments.

Of course, the complications that arise with something as simple as enforcing speed limits are equally or more problematic in other areas where one might be tempted to use micro-directives in place of laws. Sticking to speeding laws: micro-directives could “adjust” to conditions and avoid biases based on gender, race, and age, but they could also take into account many more factors. Should the allowable speed, for instance, be based on income? (After all, a person making $250K per year loses more money by driving slowly than one making $25K per year.) How about the reaction time of the driver? How about whether or not they are listening to the radio? As I drive, I don’t like using cruise control. I change my speed continually depending on the amount of traffic, whether or not someone nearby appears to be driving erratically, how much visibility I have, how closely someone is following me, how close I have to be to the car in front, and so on. Should all of these be taken into account in deciding whether or not to give a ticket? Is it “fair” for someone with extremely good vision and reaction times to be allowed to drive faster than someone with moderate vision and slow reaction times? How would people react to any such personalized micro-directives? A small sketch follows.
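To make the contrast concrete, here is a minimal sketch in Python of a flat rule versus a context-tuned micro-directive. Every factor, weight, and threshold below is invented purely for illustration (it is not from Casey and Niblett); the point is just that the “limit” becomes a function of circumstance rather than a constant.

FLAT_LIMIT_MPH = 55    # one rule for everyone, everywhere, all the time

def micro_directive_limit(visibility_mi, nearby_vehicles, road="interstate",
                          surface_dry=True, reaction_time_s=1.0):
    """Return a personalized, situational speed limit (all numbers invented)."""
    limit = 70 if road == "interstate" else 45
    if visibility_mi < 0.5:
        limit -= 15                    # fog, dusk, blind curves
    if not surface_dry:
        limit -= 10                    # rain or ice
    limit -= min(nearby_vehicles, 10)  # 1 mph per nearby vehicle, capped
    limit -= max(0.0, reaction_time_s - 1.0) * 5   # slower reflexes, lower limit
    return max(25, round(limit))

# The 1988 drive described above: clear, dry, straight, nearly empty road.
print(micro_directive_limit(visibility_mi=5.0, nearby_vehicles=1))     # 69
# A foggy, wet rush hour with a slow-reacting driver.
print(micro_directive_limit(visibility_mi=0.25, nearby_vehicles=8,
                            surface_dry=False, reaction_time_s=2.0))   # 32

Notice that even in this toy, the reaction-time parameter quietly enacts the “is it fair?” question raised above: every factor the directive consults is a policy decision in disguise.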

While the speeding-ticket situation is complex and can be fraught with emotion, what about other cases, such as abortion? Some people feel that abortion should never be legal under any circumstances; others feel it is always the woman’s choice. Many people, however, feel that it is justified only under certain circumstances. But what are those circumstances, in detail? And even if an AI system takes into account 1,000 variables to reach a “wise” decision, how would the rules and decisions be communicated?

Would an AI system be able to communicate in such a way as to personalize the manner of presentation for the specific person in the specific circumstances, to warn them that they are about to break a micro-directive? To be “fair,” one could argue that the system should be equally able to prevent everyone from breaking a micro-directive. But some people are more unpredictable than others. What if, in order to make person A 98% likely to follow the micro-directive, the AI system presents a soundtrack of a screaming child, while to make person B 98% likely to comply it need only whisper a warning? Now person B ignores the micro-directive and speeds (which, according to the premise, would happen 2% of the time). Wouldn’t person B now be likely to object that, given the same warning, they would not have ignored the micro-directive? Conversely, person A might be so disconcerted by the warning that they end up in an accident.

Anyway, there is certainly no disputing that our current system of human judgment is prone to various kinds of conscious and unconscious biases. It also seems to be the case that any system of general laws ends up punishing people for what is actually “reasonable” behavior under the circumstances, and ends up letting people off scot-free when they do despicable things that are technically legal (absurdly rich people and corporations paying zero taxes come to mind). Will driverless cars be followed by judge-less and jury-less courts?

Turing’s Nightmares

Abracadabra!

07 Sunday Aug 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ 3 Comments

Tags

"Citizens United", AI, Artificial Intelligence, biotech, cognitive computing, emotional intelligence, ethics, the singularity, Turing

IMG_7241.JPG

Abracadabra! Here’s the thing. There is no magic. Of course, there is the magic of love and the wonder at the universe and so there is metaphorical magic. But there is no physical magic and no mathematical magic. Why do we care? Because in most science fiction scenarios, when super-intelligence happens, whether it is artificial or humanoid, magic happens. Not only can the super-intelligent person or computer think more deeply and broadly, they also can start predicting the future, making objects move with their thoughts alone and so on. Unfortunately, it is not just in science fiction that one finds such impossibilities but also in the pitches of companies about biotech and the future of artificial intelligence. Now, don’t get me wrong. Of course, there are many awesome things in store for humanity in the coming millennia, most of which we cannot even anticipate. But the chances of “free unlimited energy” and a computer that will anticipate and meet our every need are slim indeed.

This all-too-popular exaggeration is not terribly surprising. I am sure much of what I do seems quite magical to our cats. People in possession of advanced or different technology often seem “magical” to those with no familiarity with the technology. But please keep in mind that making a human brain “better,” whether by making it bigger, giving it more connections, or making it faster, will not enable the brain to move objects via psychokinesis. Yes, the brain does produce a minuscule amount of electricity, but way too little to move mountains or freight trains. Of course, machines can theoretically be built to wield a lot of physical energy, but it isn’t the information-processing part of the system that directly causes something in the physical world. It is actuators of some type, just as it is with animals. Of course, super-intelligence could make the world more efficient. It is also possible that super-intelligence might discover as yet undiscovered forces of the universe. If it turns out that our understanding of reality is rather fundamentally flawed, then all bets are off. For example, if it turns out that there are twelve fundamental forces in the universe (or just one), and a super-intelligent system determines how to use them, there might be potential energy already stored in matter which can be released by the slightest “twist” in some other dimension or using some as yet undiscovered force. To human beings who have never known about the other eight forces, let alone how to harness them, this might appear as “magic.”

There is another, more subtle kind of “magic” that might be called mathematical magic. As has been known for a long time, it is theoretically possible to play perfect chess by calculating all possible moves, all possible responses to those moves, and so on, down to the final draws and checkmates. It has also been calculated that such an enumeration of contingencies would not be possible even if the entire universe were a nano-computer operating in parallel since the beginning of time. There are many similar domains. Just because a person or computer is way, way smarter does not mean they will be able to calculate every possibility in a highly complex domain.
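To get a feel for the numbers, here is a rough back-of-the-envelope sketch in Python using Claude Shannon’s classic ballpark figures (roughly 35 legal moves per position and games of roughly 80 plies); the universe-as-computer budget below is a deliberately generous assumption of my own:

import math

# Shannon-style ballpark: ~35 legal moves per position, ~80 plies per game.
branching, plies = 35, 80
game_tree = branching ** plies                 # ~10^123 positions

# A wildly generous "universe as nano-computer" budget (all rough estimates):
atoms = 10 ** 80                               # atoms in the observable universe
seconds = 4.3e17                               # ~13.8 billion years
ops_per_atom_per_second = 1e12                 # a terahertz processor per atom

budget = atoms * seconds * ops_per_atom_per_second   # ~10^109 operations

print(f"game tree      ~ 10^{math.log10(game_tree):.0f}")
print(f"compute budget ~ 10^{math.log10(budget):.0f}")
print(f"shortfall      ~ 10^{math.log10(game_tree) - math.log10(budget):.0f}")

Even handing every atom in the universe a terahertz processor for the entire age of the universe leaves the enumeration short by a factor of roughly 10^14.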

Of course, it is also possible that some domains might appear impossibly complex but actually be governed by a few simple, but extremely difficult to discover laws. For instance, it might turn out that one can calculate the precise value of a chess position (encapsulating all possible moves implicitly) through some as yet undiscovered algorithm written perhaps in an as yet undesigned language. It seems doubtful that this would be true of every domain, but it is hard to say a priori. 

There is another aspect of unpredictability, and that has to do with random and chaotic effects. Imagine trying to describe every single molecule of earth’s seas and atmosphere in terms of its motion and position. Even if there were some way to predict state N+1 from state N, we would have to know everything about state N, and the effects of the slightest miscalculation or missing piece of data would be amplified over time. So long-term predictions of fundamentally chaotic systems, like the weather, or what your kids will be up to in 50 years, or where the stock market will be in 2600, are most likely impossible; not because our systems are not intelligent enough, but because such systems are by their nature not predictable. In the short term, weather is largely, though not entirely, predictable. The same holds for what your kids will do tomorrow or, within limits, what the stock market will do. Long-term predictions are quite different.
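The logistic map is the textbook illustration of this kind of sensitivity. In the little Python sketch below, two trajectories that begin one part in a trillion apart become completely uncorrelated within a few dozen steps:

# Two trajectories of the logistic map x -> 4x(1-x), a standard chaotic system,
# starting one part in a trillion apart.
x, y = 0.2, 0.2 + 1e-12

for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.2e}")

The tiny initial error roughly doubles every step, so no achievable precision in measuring the starting state buys more than a little extra forecasting horizon.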

In The Sciences of the Artificial, Herb Simon provides a nice thought experiment about the temperature in various regions of a closed space. I am paraphrasing, but imagine a dormitory with four “quads.” Each quad has four rooms, and each room is partitioned into four areas with screens. The screens are not very good insulators, so if the temperatures in these areas differ, they will quickly converge. In the longer run, the temperature will tend toward the average within each quad. In the very long term, if no additional energy is added, the entire dormitory will tend toward the global average. So, when it comes to many kinds of interactions, nearby interactions dominate, but in the long term more global forces come into play.
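Here is a small Python simulation in the spirit of Simon’s thought experiment. The layout follows his nested quads, rooms, and screened areas, but the three “leakiness” rates are stand-ins I invented; the qualitative result is what matters: the spread within a single room collapses within hours, while the dormitory as a whole converges only over a much longer horizon.

import random

random.seed(1)

# 4 quads x 4 rooms x 4 screened areas, each starting at a random temperature.
temps = {(q, r, a): random.uniform(55, 85)
         for q in range(4) for r in range(4) for a in range(4)}

# Invented "leakiness" rates: screens leak fast, walls slower, quads slowest.
ROOM_RATE, QUAD_RATE, DORM_RATE = 0.5, 0.05, 0.005

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

def step(t):
    dorm_mean = mean(t.values())
    new = {}
    for (q, r, a), v in t.items():
        room_mean = mean([t[(q, r, x)] for x in range(4)])
        quad_mean = mean([t[(q, y, x)] for y in range(4) for x in range(4)])
        v += ROOM_RATE * (room_mean - v)     # screened areas equalize fast
        v += QUAD_RATE * (quad_mean - v)     # rooms within a quad, slower
        v += DORM_RATE * (dorm_mean - v)     # the whole dormitory, slowest
        new[(q, r, a)] = v
    return new

def spread(vals):
    vals = list(vals)
    return max(vals) - min(vals)

for hour in range(201):
    if hour in (0, 5, 50, 200):
        room = [temps[(0, 0, a)] for a in range(4)]
        print(f"hour {hour:3d}: room spread {spread(room):5.2f}F, "
              f"dorm spread {spread(temps.values()):5.2f}F")
    temps = step(temps)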

Now let us take Simon’s simple example and consider what might happen in the real world. We want to predict what the temperature will be in a particular partitioned area in 100 years. In reality, the dormitory is not a closed system. Someone may buy a space heater and continually keep their little area much warmer. Or maybe that area has a window that faces south. But it gets worse. Much worse. We have no idea whether the dormitory will even exist in 100 years. It depends on fires, earthquakes, and the generosity of alumni. In fact, we don’t even know whether brick-and-mortar colleges will exist in 100 years. As we try to predict over longer and longer time frames, more distant factors come into play, and not only in terms of physical distance: the determining factors become conceptually distant as well. On a 100-year time frame, the entire college may or may not exist, and we don’t even know whether the determining factor(s) will be financial, astronomical, geological, political, social, physical, or what. This is not a problem that will be solved via “Artificial Intelligence” or by giving human beings “better brains” via biotech.

Whoa! Hold on there. Once again, it is possible that in some other dimension or using some other as yet undiscovered force, there is a law of conservation so that going “off track” in one direction causes forces to correct the imbalance and get back on track. It seems extremely unlikely, but it is conceivable that our model of how the universe works is missing some fundamental organizing principle and what appears to us as chaotic is actually not.

The scary part, at least to me, is that some descriptions of the wonderful world that awaits us (once our biotech or AI start-up is funded) depend on there being a much simpler, as yet unknown force or set of forces that is discoverable and completely unanticipated. Color me “doubting Thomas” on that one.

It isn’t just that investing in such a venture might be risky in terms of losing money. It is that we humans are subject to a blind pride that makes us presume we can predict what the impact of a genetic change will be, not just on a particular species in the short term, but on the entire planet in the long run! We can indeed make small changes in both biotech and AI and see improvements in our lives. But when it comes to recreating dinosaurs in a real-life Jurassic Park or replacing human psychotherapists with robotic ones, we really cannot predict what the net effect will be. As humans, we are certainly capable of imagining possibilities, containing them, and slowly testing them as we introduce them. Yeah. That could happen. But…

What seems to actually happen is that companies not only want to make more money; they want to make more money now. We have evolved social, legal, and political systems that put almost no brakes on runaway greed. The result is that more than one drug has been put on the market with a net negative effect on human health. This is partly because long-term effects are very hard to ascertain, but the bigger cause is unbridled greed. Corporations, like horses, are powerful things. You can ride farther and faster on a horse. And certainly corporations are powerful agents of change. But the wise rider is master of, or partner with, the horse. Riders don’t let themselves be dragged along the ground by a rope, letting the horse go wherever it will. Sadly, that is precisely the position society is in vis-à-vis corporations. We let them determine the laws. We let them buy elections. We let them control virtually every news medium. We no longer use them to get amazing things done. We let them use us to get done what they want done. And what is it that they want done? To make hugely more money for a very few people. Despite this, most companies still manage to do a lot of net good in the world. I suspect this is because human beings are still needed for virtually every vital function in the corporation.

What will happen once the people in a corporation are no longer needed? What will happen when people who remain in a corporation are no longer people as we know them, but biologically altered? It is impossible to predict with certainty. But we can assume that it will seem to us very much like magic.

Very.

Dark.

Magic.

Abracadabra!

Turing’s Nightmares

Photo by Nikolay Ivanov on Pexels.com

Old Enough to Know Less

19 Tuesday Jul 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, machine learning, prejudice, the singularity, Turing

IMG_7308

Old Enough to Know Less?

There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are based on a billion years of evolution and are made of living cells has implications about how we experience the world. However, here we are addressing a much less philosophical and more practical issue. Moving around and interacting facilitates learning.

I first discussed this in an appendix to my dissertation. There, I compared human behavior in a problem-solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from real high-level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions they had already visited earlier in their thinking. My hypothesis was that a Grand Master examines one part of the game tree, then another, and in doing so updates a slightly altered local copy of their general evaluation function; the copy learns from the exploration, so that the evaluation function being applied becomes tuned to this particular position.
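Here is one speculative way to render that hypothesis in code; it is a toy, not a reconstruction of the dissertation. The game tree, the three-feature evaluation, and the learning rate are all invented. The mechanism is the point: the search carries a local, tunable copy of the general evaluation function and nudges it toward the values backed up from each explored subtree, so that later subtrees are examined with a position-specific evaluator.

import random

# Toy "positions" are integers; each gets three deterministic pseudo-features
# (stand-ins for things like material, mobility, and king safety).
def features(pos):
    rng = random.Random(pos)           # deterministic per position id
    return [rng.uniform(-1, 1) for _ in range(3)]

def children(pos, depth):
    """A fixed toy game tree: three moves from every non-leaf position."""
    return [] if depth == 0 else [pos * 10 + i for i in range(1, 4)]

def static_eval(pos, weights):
    return sum(w * f for w, f in zip(weights, features(pos)))

def search(pos, depth, weights, maximizing=True):
    """Minimax that, after backing up each subtree's value, nudges a local
    copy of the weights so the static eval better predicts that value."""
    kids = children(pos, depth)
    if not kids:
        return static_eval(pos, weights), weights
    w = list(weights)                  # local, tunable copy of the evaluator
    best = float('-inf') if maximizing else float('inf')
    for kid in kids:
        val, _ = search(kid, depth - 1, w, not maximizing)
        best = max(best, val) if maximizing else min(best, val)
        err = best - static_eval(pos, w)   # backed-up value vs. static guess
        w = [wi + 0.1 * err * fi for wi, fi in zip(w, features(pos))]
    return best, w

base_weights = [1.0, 0.5, 0.25]        # the "general" evaluation function
value, tuned = search(1, depth=3, weights=base_weights)
print("backed-up value :", round(value, 3))
print("general weights :", base_weights)
print("position-tuned  :", [round(wi, 3) for wi in tuned])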

Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell, and their relationships. What we see when we walk, for instance, is not a random sequence of images (unlike TV commercials!), but a sequence with very particular and useful properties. As we approach objects, we typically get more and more detailed images of them. This allows a constant tuning process for recognizing things at a distance and with minimal cues.

An analogous case could be made for getting to know people. We make inferences and assumptions about people initially based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives, as well as the longer time delays, learning more about people is not guaranteed the way it is with visual stimuli. If a person begins interacting with people who are predefined to be in a “bad” category, experience with any one of them may be viewed through such a heavy filter that the person never changes their mind, despite what an outside observer might perceive as overwhelming evidence. If a man believes all people who wear hats are “stupid” and “prone to violence,” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule,” or say, “Well, he doesn’t always wear hats,” or “The hats he wears are made by non-hat wearers, and that makes him seem peaceful and intelligent.” These misperceptions, over-generalizations, and prejudices persist partly because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.

Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature, because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulation? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how they are initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one directed top-down by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” What this rule claims (taken from the work of Paula Underwood) is that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do, and try to gather evidence about each.

If prejudice and bigotry are allowed to flourish as an “acceptable political position,” it can lead to the erosion of peace, prosperity, and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many if not most individual “Whites” themselves prejudiced; it became illegal even for unprejudiced whites to sit at the same counters, use the same restrooms, etc. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law, with an insane dictator at the helm, millions of lives were lost in “fixing” it. Of course, even having the Allies win World War II did not bring back the six million Jews who were killed. The Germans, meanwhile, were racing to develop an atomic bomb of their own. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and the differently abled on everyone? Of course they would have. And then what would have happened, once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.

Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does that play out if that nation is headed by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, a more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?

—————————————-

More about Turing’s Nightmares can be found here. Author Page on Amazon
