petersironwood

~ Finding, formulating and solving life's frustrations.
Tag Archives: AI

Cars that Lock too Much

20 Friday Mar 2020

Posted by petersironwood in America, driverless cars, psychology, story, Travel

≈ 2 Comments

Tags

AI, anecdote, computer, HCI, human factors, humor, IntelligentAgent, IT, Robotics, story, UI, UX

{Now, for something completely different, a chapter about “Intelligent Agents” and attempts to do “too much” for the user. If you’ve had similar experiences, please comment! Thanks.}


At last, we arrive in Kauai, the Garden Island. The rental car we've chosen is a bit on the luxurious side (a Mercury Marquis), but it's one of the few with a trunk large enough to hold our golf club traveling bags. W. has been waiting curbside with our bags while I got the rental car, and now I pull up beside her to load up. The policeman motioning for me to keep moving can't be serious, not like a New York police officer. After all, this is Hawaii, the Aloha State. I get out of the car and explain that we'll just be a second loading up. He looks at me, then at my rental car, then back at me with a skeptical scowl. He shrugs ever so slightly, which I take as assent. "Thanks." W. wants to throw her purse in the back seat before the heavy lifting starts. She jerks on the handle. The door is locked.

"Why didn't you unlock the door?" she asks, with just a hint of annoyance in her voice. After all, it has been a very long day since we arose before the crack of dawn and drove to JFK in order to spend the day flying here.

“I did unlock the door,” I counter.  

“Well, it’s locked now.” She counters my counter. 

I can't deny that, so I walk back around to the driver's side, unlock the door with my key, and then push the UNLOCK button, which so nicely unlocks all the doors.

The police officer steps over. "I thought you said you'd just be a second."

"Sorry, officer," I reply. "We just need to get these bags in. We'll be on our way."

Click.

W. tries the door handle. The door is locked again. "I thought you went to unlock the door," she sighs.

"I did unlock the door. Again. Look, I'll unlock the door and, right away, open it." I go back to the driver's side and use my key to unlock the door. Then I push the UNLOCK button, but W. is just a tad too early with her handle action and the door doesn't unlock. So, I tell her to wait a second.

Photo by Brett Sayles on Pexels.com

"What?" This luxury car is scientifically engineered not to let any outside sounds disturb the driver or passenger. Unfortunately, this same sophisticated acoustic engineering also prevents any sounds that the driver might be making from escaping into the warm Hawaiian air. I push the UNLOCK button again. W. looks at me, puzzled.

I see dead people in my future if we don’t get the car loaded soon. For a moment, the police officer is busy elsewhere, but begins to stroll back toward us. I rush around the car and grab at the rear door handle on the passenger side. 

But just a little too late.  

“Okay,” I say in an even, controlled voice.  “Let’s just put the bags in the trunk.  Then we’ll deal with the rest of our stuff.” 

The police officer is beginning to change color now, chameleon-like, into something like a hibiscus flower. "Look," he growls. "Get this car out of here."

"Right." I have no idea how we are going to coordinate this. Am I going to have to park and drag all our stuff, or what? Anyway, I go to the driver's side and see that someone has left the keys in the ignition but locked the car door; actually, all the car doors. A terrifying thought flashes into my mind. Could this car have been named after the "Marquis de Sade"? That hadn't occurred to me before.


Photo by Dom J on Pexels.com

Now, I have to say right off the bat that my father was an engineer and some of my best friends are engineers. And, I know that the engineer who designed the safety locking features of this car had our welfare in mind. I know, without a doubt, that our best interests were uppermost. He or she was thinking of the following kind of scenario.

"Suppose this teenage couple is out parking and they get attacked by the Creature from the Black Lagoon. Wouldn't it be cool if the doors locked just a split second after they got in? Those saved milliseconds could be crucial."

Well, it’s a nice thought, I grant you, but first of all, teenage couples don’t bother to “park” any more. And, second, the Creature from the Black Lagoon is equally dated, not to mention dead. In the course of our two weeks in Hawaii, our car locked itself on 48 separate, unnecessary and totally annoying occasions.  

And, I wouldn’t mind so much our $100 ticket and the inconvenience at the airport if it were only misguided car locks. But, you and I both know that it isn’t just misguided car locks. No, we are beginning to be bombarded with “smart technology” that is typically really stupid. 


Photo by Andrea Piacquadio on Pexels.com

As another case in point, as I type this manuscript, the editor or sadistitor or whatever it is tries to help me by scrolling the page up and down in a seemingly random fashion so that I am looking at the words I’m typing just HERE when quite unexpectedly and suddenly they appear HERE. (Well, I know this is hard to explain without hand gestures; you’ll have to trust me that it’s highly annoying.) This is the same “editor” or “assistant” or whatever that allowed me to center the title and author’s names. Fine. On to the second page. Well, I don’t want the rest of the document centered so I choose the icon for left justified. That seems plausible enough. So far, so good. Then, I happen to look back up to the author’s names. They are also left-justified. Why?  

Somehow, this intelligent software must have figured, “Well, hey, if the writer wants this text he’s about to type to be left-justified, I’ll just bet that he or she meant to left-justify what was just typed as well.” Thanks, but no thanks. I went back and centered the author’s names. And then inserted a page break and went to write the text of this book.  But, guess what? It’s centered. No, I don’t want the whole book centered, so I click on the icon for left-justification again. And, again, my brilliant little friend behind the scenes left-justifies the author’s names. I’m starting to wonder whether this program is named (using a hash code) for the Marquis de Sade.  

On the other hand, in places where you’d think the software might eventually “get a clue” about my intentions, it never does. For example, whenever I open up a “certain program,” it always begins as a default about 4 levels up in the hierarchy of the directory chain. It never seems to notice that I never do anything but dive 4 levels down and open up files there. Ah, well. This situation came about in the first place because somehow this machine figures that “My Computer” and “My hard-drive” are SUB-sets of “My Documents.” What?  


Did I mention another "Intelligent Agent"? … Let us just call him "Staple." At first, "Staple" did not seem so annoying. Just a few absurd and totally out-of-context suggestions down in the corner of the page. But then, I guess because he felt ignored, he began to become grumpier. And more obnoxious. Now, he's gotten into the following habit. Whenever I begin to prepare a presentation… but first, you have to understand the context.

In case you haven’t noticed, American “productivity” is way up. What does that really mean? It means that fewer and fewer people are left doing the jobs that more and more people used to do. In other words, it means that whenever I am working on a presentation, I have no time for jokes. I’m not in the mood. Generally, I get e-mail insisting that I summarize a lifetime of work in 2-3 foils for an unspecified audience and an unspecified purpose but with the undertone that if I don’t do a great job, I’ll be on the bread line. A typical e-mail request might be like this:

“Classification: URGENT.

“Date: June 4th, 2002.

“Subject: Bible

“Please summarize the Bible in two foils. We need this as soon as possible but no later than June 3rd, 2002. Include business proposition, headcount, overall costs, anticipated benefits and all major technical issues. By the way, travel expenses have been limited to reimbursement for hitchhiking gear.”

Okay, I am beginning to get an inkling that the word “Urgent” has begun to get over-applied. If someone is choking to death, that is “urgent.” If a plane is about to smash into a highly populated area, that is “urgent.” If a pandemic is about to sweep the country, that is “urgent.” If some executive is trying to get a raise by showing his boss how smart he is, I’m sorry, but that might be “important” or perhaps “useful” but it is sure as heck not “urgent.”  

All right. Now, you understand that inane suggestions, in this context, are not really all that appreciated. In a different era, with a different economic climate, in an English pub after a couple of pints of McEwan's or McSorley's or Guinness, after a couple of dart games, I might be in the mood for idiotic interruptions. But not here, not now, not in this actual and extremely material world.

So, imagine my reaction to the following scenario. I’m attempting to summarize the Bible in two foils and up pops Mr. “Staple” with a question. “Do you want me to show you how to install the driver for an external projector?” Uh, no thanks. I have to admit that the first time this little annoyance appeared, I had zero temptation to drive my fist through the flat panel display. I just clicked NO and the DON’T SHOW ME THIS HINT AGAIN. And, soon I was back to the urgent job of summarizing the Bible in two foils. 

About 1.414 days later, I got another “urgent” request.

“You must fill out form AZ-78666 on-line and prepare a justification presentation (no more than 2 foils). Please do not respond to this e-mail as it was sent from a disconnected service machine. If you have any questions, please call the following [uninstalled] number: 222-111-9999.”  

Sure, I'm used to this by now. But when I open up the application, what do I see? You guessed it. A happy smiley little "Staple" with a question:

"Do you want me to show you how to install the driver for an external projector?"

"No," I mutter to myself, "and I'm pretty sure we already had this conversation." I click on NO THANKS. And I DON'T WANT TO SEE THIS HINT AGAIN. (But of course, the "intelligent agent," in its infinite wisdom, knows that secretly, it's my life's ambition to see this hint again and again and again.)

A friend of mine did something to my word processing program. I don’t know what. Nor does she. But now, whenever I begin a file, rather than having a large space in which to type and a small space off to the left for outlining, I have a large space for outlining and a teeny space to type. No-one has been able to figure this out. But, I’m sure that in some curious way, the software has intuited (as has the reader) that I need much more time spent on organization and less time (and space) devoted to what I actually say. (Chalk a “correct” up for the IA. As they say, “Even a blind tiger sometimes eats a poacher.” or whatever the expression is.)

Well, I shrank the region for outlining and expanded the region for typing and guess what? You guessed it! Another intelligent agent decided to "change my font." So, now, instead of the font I'm used to … which is still listed in the toolbar the same way, 12 point, Times New Roman … I have a font which actually looks more like 16 point. And at long last, the Intelligent Agent pops up with a question I can relate to! "Would you like me to install someone competent in the Putin misadministration?"

What do you know? “Even a blind tiger sometimes eats a poacher.”



 

Author Page on Amazon

Start of the First Book of The Myths of the Veritas

Start of the Second Book of the Myths of the Veritas

Table of Contents for the Second Book of the Veritas

Table of Contents for Essays on America 

Index for a Pattern Language for Teamwork and Collaboration  

Essays on America: The Temperature Gauge

09 Thursday Jan 2020

Posted by petersironwood in America, apocalypse, driverless cars, politics, Uncategorized

≈ 6 Comments

Tags

AI, America, cancer, Democracy, driverless cars, ethics, government


Photo by Drew Rae on Pexels.com

The sun is shining! Spring is here at last, and the trees are in bloom. You’re driving down the road and you see … 

That your “Engine over-heating” light goes on! 

You think: My engine’s over-heating! 

Or,  you think, it isn’t over-heating at all; I just have a bad sensor. 

Over the next few months, the red light goes on several other times, and each time, you pull over and try to judge whether the engine is really over-heated. No easy task. But you get back in and turn the car on and lo and behold, the light’s no longer on. Aloud, you mutter: “I’ve got to get that damned sensor fixed. Maybe next week.”

In the olden days of driving cars, I had a continuous gauge of the temperature. It was more obvious if it was acting oddly because I had more information. I could track it day to day. If I went on a long trip, I could see whether the behavior of the gauge "made sense." I might go up a long mountain road on a hot sunny day and expect to see the temperature gauge climb. On the other hand, if I went back down that same mountain at night and the temperature gauge climbed, I would know to get it checked.


Photo by Deva Darshan on Pexels.com

Suppose that, instead of a gauge, all you or I get is one bit of information: "Temperature sensor says overheated." It's much harder to judge the veracity of the source. But if we cannot even trust the reliability of the sensor, then we don't even get one bit of information. Before the light comes on, there are four possible states (not equally likely, by the way, but that's not important for the following argument):

Engine OK, Sensor OK; 

Engine OK, Sensor ~OK; 

Engine ~OK, Sensor OK; 

Engine ~OK, Sensor ~OK. 

When the red light comes on, you have some information because the state of:

Engine OK, Sensor OK is eliminated. 

But is it? 


It certainly is — under a certain set of assumptions — but let’s try to tease apart what those assumptions are and see whether they necessarily hold in today’s world, or in tomorrow’s world. 
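One way to make those assumptions concrete is a quick Bayesian calculation. The sketch below is my own illustration, not something from the original argument, and every number in it (the prior on overheating, the sensor's behavior) is invented for the example. It simply shows that the warning light carries information only to the extent that the sensor behaves differently when the engine really is hot.

```python
# A minimal, illustrative sketch (invented numbers): how much does the
# warning light actually tell you, given how much you trust the sensor?

def p_overheated_given_light(p_overheat, p_light_if_hot, p_light_if_ok):
    """Bayes' rule: probability the engine really is overheating,
    given that the warning light is on."""
    p_light = (p_light_if_hot * p_overheat
               + p_light_if_ok * (1.0 - p_overheat))
    return p_light_if_hot * p_overheat / p_light

# Trustworthy sensor: lights up almost only when the engine is hot.
print(p_overheated_given_light(0.05, 0.99, 0.01))  # ~0.84: the light means something

# Flaky (or malicious) sensor: lights up half the time regardless.
print(p_overheated_given_light(0.05, 0.50, 0.50))  # 0.05: no better than the prior
```

In the second case the posterior equals the prior: the "Engine OK, Sensor OK" state has not really been eliminated at all, which is exactly the worry about a sensor you cannot trust.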

Let's imagine for a moment that your automobile is bewitched and inhabited by an evil demon with limited magical powers, mainly to do with the car itself. If you've seen the movie Christine, you'll know what I mean. If you haven't seen it, please buy the book instead. It's so much better. But let's get back to our own evil-spirited car. This car, let's call him "URUMPUT," because it sounds a bit like a car engine and because — you know, just because. Let's imagine the car has a lot of mileage and is painted a kind of sickly orange color. The tires are bald, and it's a real gas guzzler. It's actually more of a jalopy than a car. Your friends would have thought you could have done much better, but it is apparently what you're stuck with for now.

URUMPUT, unbeknownst to you, is actually out to kill you, but his powers are limited. He cannot simply lock the doors and reroute the exhaust till you pass out from the fumes. So, what he does is override the sensor so that you get out to take a look at your car. You open the hood, you look inside, and BLAM! Down comes the hood on your head with enough force to snap your neck. When your neck is snapped, you don't die instantaneously. You are aware that something is terribly wrong. Your brain sends signals for you to move, to get the damned hood off, but you can't move. And, worse, you can't breathe. Soon, but much too late, you realize something has gone terribly wrong.

You. 

Are. 

Dead! 

That blasted URUMPUT got you. Why? Just because he could. He paid you no more mind than had you been an ant on the road. He gave you misinformation. That is, information that you thought you had because you assumed you were dealing with a system that, although imperfect, had some degree of transparency. You certainly did not think you were dealing with an actively evil agent. But you were. And now you're dead. (But go ahead and read the rest as though you were still alive.)

Of course, in real life, there are no bewitched cars. We all know that. 


Do we? 

Let's consider how much electronics and "smarts" already exist in cars. The amount will skyrocket with driverless cars. For one thing, the human "occupants" will be able to have much more engaging entertainment. Perhaps more importantly, the "brain" of the car will be able to react to a much wider array of data more quickly than most human drivers could.

With all the extra sensors, communications, components, functions, protocols, etc. there will be greatly enhanced functionality. 

There will also be all sorts of places where a “bad actor” might intentionally harm the vehicle or even harm the occupants. Your insurance company, for instance, might fake some of the data in the black box of your car to indicate that you drove a lot during nighttime hours. It doesn’t seem to match your recollection, but how would you double check? You grudgingly pay the increased premium. 


Photo by Pixabay on Pexels.com

Behind on your loan shark payments? Oops? Your driverless car just steered itself off a cliff and all the occupants were killed. 

Oh, but how, you ask, would loan sharks get hold of the software in your car? 

Then, I have to ask you a question right back. Have you been watching the news the last couple of years? People who owe a great deal of money to the wrong people will do anything to avoid the promised punishments that follow non-payment. 

Our government at this point is definitely not much like old time cars that allowed you to see what was going on and make judgments for yourself. This government just sends out signals that say, “Everything’s Fine!” and “Do as I say!” and “Those people NOT like you? They are the cause of all your troubles.” 


That is not transparency. 

That is not even informational. 

That is misinformation. 

But it is not misinformation of the sort where a student says: “Akron is the capital of Ohio.” That’s wrong, but it’s not maliciously wrong. 

When people lose a limb as a result of an accident, cancer, or war, they often experience something called the “Phantom Limb Experience.” They have distinct sensations, including pain, “in” the limb that is no longer there. The engine’s not working but the sensor is also bad. 

That’s where we are. 

The engine’s not working. The feedback to us about whether it’s working is also malicious misinformation. 

We have the Phantom Limb Experience of having a government that is working for American interests. 

We need to regrow the missing limb or get a really good prosthetic. 

We need straight information from the government which is supposed to take input from all of us and then make decisions for all of us. It’s never been perfect, but this is the first time it is not even trying or pretending to be fair or even accurate. People in top level positions in our government think that their oath of office is a joke. 

We live in a monster car — and not the fun kind — the Christine kind. 

The engine's not working. And the sensor light means nothing. If you look under the hood to find out what's really going on, you'd better have a partner ready to grab the hood and prevent it from being slammed down on your head. Because URUMPUT would do it with as little regard for you as he shows when he sets out to destroy any other whistleblower.


Photo by Pixabay on Pexels.com

———————————————

The Invisibility Cloak of Habit

Author Page on Amazon

Story about Driverless Cars (from Turing’s Nightmares). 

A Once-Baked Potato

28 Saturday Sep 2019

Posted by petersironwood in America, driverless cars, politics, psychology

≈ 7 Comments

Tags

AI, automation, driverless cars, life, politics, truth

A Once-Baked Potato 


Photo by Pixabay on Pexels.com

I’m really not ready to go for a long, high speed trip in a completely automated car. 


Photo by Alec Herrera on Pexels.com

I say that because of my baked potatoes. One for me. One for my wife. 

I've done it many times before. Here is my typical process. I take out a variety of vegetables to chop and chop the broccoli, red onion, garlic, and red pepper while the potatoes are in the microwave. I put them in for some odd time like 4:32 and then, when that times out, I "test" the potatoes with a fork and put them in for more time. Actually, before I even take them out to use the "fork test," I shake the potatoes. I can tell from the "feel" whether they are still rock hard. If they are marginal, then I use the more sensitive "fork test." Meanwhile, I chop more vegetables and take out the cheese. I test the potatoes again. At some point, they are well done and I slather them up with butter and cheese and then add the chopped vegetables.


Photo by Pixabay on Pexels.com

Delicious. 

But today is different. 

I pushed a button on the microwave that says, "Baked Potato." Right away, I think: "Baked potato? I'm not putting in a baked potato. I'm putting in a raw potato. You have a button labelled 'Popcorn' — it doesn't say 'Popped Corn,' so … ?" Anyway, I decided to give it a try.

The first disadvantage I see is that I have no idea whatsoever how long this process is going to take. I assume it has to take at least four and a half minutes. When I cook it via my usual process, it's on "high" or "full power." So, unless the microwave has a "hidden" higher power level that it allows its internal programs to access but not its end users, it seems I have at least four and a half minutes to chop.

Changing the way you do things always causes a little bit of discomfort, though often, a feeling of adventure outweighs that cautionary urge. In this case, I felt a lot of discomfort. The microwave can’t feel how done the potato is so it must be using some other sensor or sensors — likely moisture — though there may be other ways to do it. How do I know that the correlation between how I measure “doneness” and how the microwave measures “doneness” is even moderate? I am also a little concerned that there are, after all, two potatoes, not just one. There was no way to tell the machine that I had two potatoes. I decided that it was likely that the technical problems had been solved. 

Why? Certainly not because I have great faith in large multinational corporations to “do what’s right” rather than do what’s expedient. Once upon a time, not so many years ago, that really was my default assumption. But no longer. Too many lies by too many corporations about too many separate topics. Once upon a time, the government held some power to hold corporations accountable for their actions. Now, the power seems to have shifted so that many politicians — too many — are beholden to their corporate owners.  

Corporations just try to work for their own self-interest. They aren't very good at it, but that's their goal.

Among the common ways they fail is by being too conservative. If they are successful by doing things a certain way, they often keep at it despite changes in the technology, the markets, the cost structures, the distribution possibilities, etc. (They are too afraid to push the "Baked Potato" button.) At the same time, there seems to be no evil that many of them would forswear in order to grow their profits; no lie too preposterous for them to tell.


Photo by Alex Andrews on Pexels.com

Yet, I live, at least for now, in this world surrounded by products made by these companies and interacting with them all the time. I cannot trust them as a whole, but it’s almost impossible not to rely on some of them some of the time. They can’t fool all of the people all of the time. 

I do calculate that if they put these buttons on there and they were horrible, word would get around and they would lose market share. This presumes that there is real competition in the market. 

I think it likely that driverless cars will be “safer” than human drivers on average within ten years, and possibly sooner. My discomfort stems, again, partly from habit, but largely from a lack of confidence in the ethics of corporations. Normally, I would think that when it comes to life and death, at least, I can put some degree of faith in the government to oversee these companies enough to ensure their safety data were accurate. 

But I no longer believe that. And even after Trump resigns or gets impeached & convicted or he flees to Russia, there is no way to know how deeply and pervasively this corrupt misadministration has crept into the ethics of lesser government officials.  Any government official might think: “after all, if the President is flouting the Constitution by using the power of his office for his own benefit, why shouldn’t I? I need a bribe just as much as the next person and I certainly need the money more than Trump did!”


Photo by Pixabay on Pexels.com

Beep. Beep. 

The microwave claims the potatoes are done. 

And so they are. Perfectly. 

There is still hope for America. 


Maybe I will be able to take that ride after all. 


 

Author Page on Amazon. 

Corn on the Cob

Parametric Recipes and American Democracy 

Pies on Offer

Garlic Cloves and Puffer Fish

The Pros and Cons of AI: Part One

 

Tu-Swift Dreams of Drums

23 Friday Aug 2019

Posted by petersironwood in America, creativity, politics, Uncategorized, Veritas

≈ 1 Comment

Tags

AI, ethics, language, legends, myths, philology, reading, stories, trust

Tu-Swift Dreams of Drums.


Photo by Pixabay on Pexels.com

Tu-Swift's lids felt heavy. As they fluttered shut, the strange markings on the hides swam before his eyes. In the distance, he could hear drumming. Drumming. Very pleasant. Very nice. Tu-Swift remembered hearing the drumming as She Who Saves Many Lives intoned a long poem for all of the people. It was a poem about animals, and people, and language. Tu-Swift, like all the Veritas, had memorized it at an early age. He knew the prose version as well. As She Who Saves Many Lives sang the ancient song, one of the braves, Stone Chipper, used sign language to portray the same story. Perhaps from working with stone, he looked like stone. The muscles of his chest, shoulders and arms writhed as he moved from position to position. It had been hard for Tu-Swift to follow as a child.

Now, in his half-dream state, Tu-Swift could slow the playing of the memory and the memory became the dream. He could see the positions that Stone Chipper used. Then, an odd thing happened (as they are wont to do in dreams). The arms of Stone Chipper became sticks. And every time that he moved them into a new position, he heard the voice of She Who Saves Many Lives saying the sounds of the animals. 


Photo by Pixabay on Pexels.com

The sounds. Did we steal them? Did we borrow them? How can we steal them? The snake still hisses. The owl still hoots. The bee still buzzes. And before his internal eyes, Tu-Swift saw the snake and the owl and a swarm of buzzing bees all dancing and playing together. Now, they lined up and came toward him. First, the snake flew toward his face hissing – ‘sssssss.’ Just as it reached him, it opened its mouth, sharp fangs, forked tongue, and then the snake veered off. The hoot owl hooted and stretched for Tu-Swift, talons first. The hooting sound became louder and louder: ‘ooooOOOO OOOO!’ But the owl also caromed away. Now, the swarm of bees zipped toward him buzzing all the while. Suddenly, one of the girls Tu-Swift fancied from home, Sooz, appeared before him smiling. Except now she had cat eyes. She said her name, ‘Sooz’,  and nodded to him just as she had when they first met. Now, she did something odd. She waved her right arm into the crook of her left elbow making the sign for snake; then, she quickly turned her hands outward making them into the claws that signified owl and then her fingertips all moved nervously like a swarm of buzzing bees. Now, she flew away from him and as she disappeared into a bright green cloud, she said, “Remember me. Remember Sooz.” 

Tu-Swift muttered in his sleep, “I will Sooz. I will.” 

Shadow Walker chuckled to himself. He looked down to see the fluttering eyes of Tu-Swift, who obviously walked now in the shadow world of dreams. He recalled some of the times that he and Many Paths had spoken of each other's dreams. He had been dreaming of her, in fact, when something inside him told him it was time for him to keep watch and let Tu-Swift sleep.

Shadow Walker again turned his thought to the girl with the eyes like a cat. She seemed to be telling the truth even though her tale was amazing, if true. Still, she was definitely holding something back. There was something important that she had not yet told them, but he wasn’t sure what it was. Possibly, she herself had done something against the ways of the Veritas. Although…how could she help it if she were stolen as a child? 

Shadow Walker now heard Tu-Swift muttering again, first about drumming, and Cat Eyes, and language. Like all dream mutterings, it made little sense. He would ask him about it upon wakening. Shadow Walker had found that dreams were easily recalled if they were recounted upon waking but seldom recalled once one began the chores of the day.

Meanwhile, quite oblivious to Shadow Walker, Tu-Swift now found himself dreaming of sitting astride a horse, a giant golden horse. He held ropes in his hands and he could control the horse via these ropes. Jaccim Nohan trotted alongside on another horse and spoke to him in Veritas. They now seemed friends, but that was not surprising in the dream world. Jaccim's body turned into sticks of firewood, but he continued to talk…although…it wasn't exactly talking. He was using his stick limbs to form sign language. Yet, Tu-Swift heard it as words spoken in the voice of Jaccim, but the words were not ROI but Veritas. He listened to the words and kicked the giant horse firmly but not cruelly and lightly whipped the reins. The giant horse took off galloping up a hill, leapt up into the sky, and Tu-Swift was flying atop his horse — sailing through the sky effortlessly though the steady drumming of hooves continued even louder than before.


Now, Tu-Swift had fallen off his horse into a pit of giant snakes – squeeze snakes – who were going to squeeze him to death. Where was his horse? He tried to slide the snakes off of his arms but they wouldn’t go. They could speak his name! “Tu-Swift! Tu-Swift! Wake up!”  

Tu-Swift shook his head and came awake. Shadow Walker was shaking him. “Wake up! Wake up! War drums. We must go. Now. Wake up!” 

“What? Whose war drums?” Tu-Swift tried to focus but it was difficult. 

Shadow Walker took Tu-Swift’s head in his hands and stared into his vacant eyes. “I don’t know. But it isn’t Veritas! Wake! We must go!” 

At last, Tu-Swift returned to this world and he saw Shadow Walker quickly putting their things together for a quick journey. “What of Cat Eyes and the others?” 

Shadow Walker sighed. “I think we may have to leave them here. Or at least Jaccim. He is too hurt to travel quickly.”


Photo by David Dibert on Pexels.com

———————————-

Author Page on Amazon

Sci-Fi Scenarios about the Future of AI

Pattern Language for Teamwork and Cooperation: Overview

A Story of Early Work in Human Computer Interaction

The Creation Myth of the Veritas

The Myths of the Veritas: The First Ring of Empathy

The Myths of the Veritas: The Second Book

Family Matters, Part 3: The Whole is Greater than the Sum of its Parts

27 Saturday May 2017

Posted by petersironwood in America, family, health, The Singularity, Uncategorized

≈ 3 Comments

Tags

AI, Artificial Intelligence, cognitive computing, decision making, family


 

Some of my earliest and fondest memories centered around family dinners at my grandpa and grandma's house. For Thanksgiving, for example, there was turkey, mashed potatoes, gravy, sweet potatoes, green beans, olives, rolls, salad, and several pies for dessert. But beyond the vast array of food, it was fun to see my grandparents, parents, three aunts and three uncles, and various numbers of cousins. On a few occasions, my second cousin George appeared and, early on, my Aunt Mary and Aunt Emma. All of these people were so different! We had more fun because we were all there together.

You have heard "The Whole is Greater than the Sum of its Parts" before, no doubt, but I think this is what it means when applied to a family setting. All families argue (although ours never did in these larger Holiday settings). And almost all families love. But a fundamental question is this: do the people in the family tend to "thrive" more than they would on their own? If the family is functional, this should be the case. They balance each other; they support each other; they help each other improve. They cooperate when it counts. You will not always agree on everything. Far from it. You might be a slob like Oscar while your sibling might be very Felix-like. And you're both "right" under different circumstances and for different tastes.

Many sports teams will have a variety of people who excel more in running, or in blocking, or in throwing, or in scoring. In baseball, for instance, or American Football, there are very different people in different roles, both physically and temperamentally. An offensive lineman in football will typically be stronger and bigger than a quarterback. Moreover, if the lineman gets "angry," they might be able to block better on the next play. By contrast, the quarterback must remain calm, cool, and confident under pressure. He must try to put away any fear or anger or depression he feels on the way to the huddle before he gets there and certainly before the snap. When teams are working well together, they don't criticize each other for differences, and they work together to win the game rather than wasting time pointing fingers or trying to assign blame. In a baseball or football team, there is no question that the individual does better because of his teammates. Working together they can solve problems, win trophies, and have more fun than they could individually.


Your right eye sees the world a little differently from your left eye. Thank goodness! Your brain normally combines these two somewhat different flat, 2-D pictures into a 3-D picture! Your brain does not argue as to which one of these views is "correct." It certainly does not instigate religious wars over it. I say that the brain "normally" does this. However, if a person is born with eyes that do not move or align smoothly, or if one eye is extremely near-sighted, it can happen that the brain "chooses" one eye to pay attention to. In this case, it seems the two images are so discrepant that the brain "gives up" trying to integrate them and instead chooses one image to use. In a condition such as "amblyopia," the brain mainly relies on the input from one eye. This condition is a distinct disadvantage in many sports.

In boxing, for example, it is literally a show-stopper. A fighter might look like hamburger, but the fight goes on. If, however, there is a cut above his or her eye so that blood drips down to obscure vision in one eye, the fight is stopped. That fighter can no longer see in depth (as well as losing some peripheral vision). It is no longer deemed a “fair” fight. Anyway, it seems the human brain does have some limits as to how much two discrepant views can be reconciled, at least when it comes to vision. Is there a limit to how much a family may disagree productively and still be functional? This is a good question, but one to return to later. Instead, let’s first turn to what are called “dysfunctional families.”

We said that in a functional family or team, people are better off than they would be doing something on their own. On the other hand, consider a dysfunctional family. Here, people get mostly grief, judgement, criticism, competition, and lies. Why does this happen? Often, dysfunctional behaviors are handed down from generation to generation through social learning, among other things. If too many dysfunctional behaviors are in one family, this causes a "vicious circle" that makes things worse and worse. For example, imagine a family that is basically healthy but does not engage in "alternatives thinking." They see a situation, come up with an idea, and unless there is imminent danger, execute the idea as soon as possible. They will end up in a lot of trouble with that strategy. However, if they don't engage in blame-finding, but instead engage in collective improvement, they will learn over time to make fewer and fewer mistakes. People will all benefit from being in the family. But if a family instead fails to consider multiple alternatives before committing to a course of action and has a cycle of blaming each other without ever improving, then it will probably be dysfunctional. People will give more and get less in return than if they had been working alone. That does not mean there are zero benefits within a dysfunctional family. They may still cover for each other, help each other, provide emotional support, etc. But the costs outweigh the benefits in the long run.

People who come from functional families tend to see the world in a very different way as compared with people who come from dysfunctional families. Obviously, there are all sorts of exceptions as well as other factors at play, but other things being equal, these families of origin color our perceptions of daily life and predispose us to certain actions. Depending on the circumstances, it is even true that some of what we think of as "dysfunction" could actually be "function" instead. Suppose, for instance, you and two siblings suddenly found yourselves attacked by a bear. It may be the best thing imaginable to take the first action you think of without trying to over-analyze the situation. Or not. It may well depend on the bear. And, therein lies the rub.


Our own personal experiences are always a teeny sliver of all possible situations. So, your experience with a bear, bee, or bank may be quite different from mine. As a consequence, we may have different ideas about what constitutes function or dysfunction. In terms of the argument I am about to make, it doesn’t really matter which is “better” or “worse.” All that matters is that we agree some families provide a healthier environment than others. And attitudes are not all that are handed down; so are “ways to do things.”

Perhaps the arbitrary nature of what we consider "intelligent" wisdom handed down in families is best illustrated by a story about making a Holiday Ham. In the kitchen, a 10-year-old boy asks: "How come you're slicing off the ends of the ham?"

His mom answers, “Oh, that’s the way your grandpa always did it.”

Son: “So, why did he do it?”

Mom: “Oh, well. Uh. I don’t really know. Let’s go ask him.”

Son: “Hey, Grandpa, how come you cut the ends of the ham off?”

Grandpa: "Well, sonny. It's because…it's because…let's see. That's the way my mom always did it."

As it turns out, the 90-year-old great-grandma was at the feast as well. Though she was a bit hard of hearing, they eventually got her to understand the question, and thus she answered, "Oh, I always used to cut off the ends because I only had one small pan and it wouldn't fit. No reason for you all to do it now."

And there you have it in a nutshell. We are all walking around with thousands, if not millions, of little bits of "folk wisdom" we learned through our family interactions. In most cases, we're not even aware of them. In virtually no case did we ask where this folk wisdom came from. Have any of us actually tested one of these out in our own life to see whether it still holds up? And then what? Are you going to inform the others in the family that what everyone believes may not actually be true, at least in every case? Maybe. Most do not, in my experience. In addition, it seems that if you are from a "functional" family, you are much more likely to share this kind of experience (though still not 100% of the time). People will often be interested in it and want to learn more. If you are from a more dysfunctional family, you might be more likely to realize they would put you down and try to shoot holes in your example. They might laugh at you. They might just not talk to you. So, what do you do?


We can extend these ideas to much broader notions such as a clan, a team, a business, a nation. For people who were not lucky enough to grow up in a functional family, the notions of trust and cooperation come hard. And that's a sad thing. Because your sense of what a bee or a bear or a bank is like will tend to be based on your own experience, with very little reliance on the experiences of others. You are one person. There are 7 billion on the planet. So, yes, you can rely on your own experience and dismiss everyone else's. Good luck.

Even a functional family may draw the boundaries around itself so tightly and firmly that anyone “inside” the circle of trust is trusted but anyone outside is fair game to take unfair advantage of. At the same time, such a family regards anyone outside as a threat who must “obviously” be out to get their family. People from this type of family do know cooperation and trust, but find it nearly impossible to extend the concept across boundaries of family, culture, or nation.  They are happy to hear about their brother’s experiences with bees but they are not much interested in the experiences of their cousins from half way around the world.

Everyone must decide for themselves how much to rely on their own experiences and how much to rely on close relatives, authority figures, ancient teachings, or the vast collective experience of humanity. Of course, it doesn't have to be an either/or thing. You might "weight" different experiences differently. And that weighting may reasonably be quite different for different types of situations and strangers. For instance, if your cousin is a smoother talker, vastly more handsome, and twenty years younger, you might not put much stock in his or her advice about how to "hook up." You might instead put more credence in someone at work who is in a similar situation. You might put very little stock in the experiences from a culture that relies on arranged marriages. Surprisingly, exactly because they are from a very different situation and therefore have a quite different take on matters, they may give you very new and creative ways to approach your situation. For example, you might find that if you "pretend" you are already "pledged" to a partner your parents chose, dating might be less anxiety-provoking and more fun. You might actually be more successful. I'm not saying this specific strategy would work or that ideas from other cultures are always better than ones from your own culture. I am just saying that they need not be dismissed out of hand, not because it's "politically correct" but because it is in your own selfish interest.

I’ve already mentioned in previous blogs that people are highly related and inter-connected via genetics, their environmental interchanges, their informational interchanges and through the emotional tone of their interactions. Because people are highly interconnected, you can find much wisdom in the experiences of others. But there is another, largely underused aspect of this vast inter-relatedness. I call it familial gradient cognition. Or, if you like, “Mom’s somewhat like me.”

To understand this concept and why it is important, let’s first take a medical example. However, this potential type of thinking is not limited to medical problems. It basically applies to everything. So, you have a pain in your right hip. What is the cause and how do you fix it? That’s your question for the doctor, or more likely, nurse practitioner. They will typically ask questions about your activity, diet, what you’ve done lately, when the pain comes and goes etc. They may run various tests and decide you have sciatica. This in turn leads to a number of possible treatments. When I had sciatica, I got referred to a sports medicine doctor and got acupuncture. It worked. (Later, I discovered an even better treatment — the books of John Sarno). Anyway, we would call this a success and it seems like a reasonable process. But is it?

The medical professional’s knowledge is based on watching other experts, book learning, their own experience etc. And so they basically engage in this multiplication of experience. The modern doctor’s observations are based on literally many millions of cases; far more than he or she could possibly observe first hand. But what potentially useful information was completely omitted from the process described above? Hint: blogpost title.

Yes, exactly. Throughout this whole process, no one asked me whether anyone in my family (e.g., my mom, dad, or brother) had had these symptoms. No one asked whether they had had any kind of treatment, and if so, what had worked and not worked for them. Now, my brother, mom, and dad are especially closely related to me, but so are my four children and my grandparents, aunts, uncles, nieces, nephews, and grandchildren. And, in the most usual cases, it isn't merely that we share even slightly more genes than all of humanity. We are also likely to share diet, routines, climate, history, and family stories and values. These too can play a part in promoting health. For example, did people in your family believe in "toughing it out" or were they more hypochondriacal? The chances are, you will tend to have similar attitudes.

In medicine, would it be better to make decisions based, not just on the data of the one individual under treatment, but on the entire tree with more weight given to the data for other individuals based on how closely related they were? Of course, family relations are only one way in which the data of some individuals will be more likely relevant to your case than will others. For instance, people in the same age cohort, people who live in the same area, people who are in similar professions or who work out the same number of hours a week that you do will be, other things being equal, of more relevance than their opposites.

Of course, as I’ve already mentioned, modern medicine does take into account the life experiences of many other people. But these other “people” are completely unknown. Studies are collectively based on a hodgepodge of people. Some studies use random sampling, but that is still going to be a random sample limited by geography, age, condition, etc. Other studies will use “stratified sampling” that will report on various groups differently. Some studies are meta-studies of other studies and so on. But how similar or dissimilar these people were to each other on a thousand or a million potentially relevant factors is more than 99% lost in the reporting of the data. But that doesn’t really matter because the doctor would typically not look at any article in response to your case because he or she will base their judgement on just you and the information they know “in general” which is based on a total mishmash of people.

Imagine instead that every person's medical issues were known, as well as how everyone was related to everyone else, not only genetically but historically, environmentally, etc. And now imagine that in making diagnosis decisions as well as choosing treatment options, the various trees of people who were "related" to you in these thousands of ways were weighted by how close they were on all these factors. Over time, the factors themselves could become weighted differently under different circumstances and symptoms, but for now, let's just imagine they are treated equally. It seems clear that this would result in better decision making. Of course, one reason no one does this today is that keeping track of all that data is mind-boggling. Even if you had access to all the relevant data, we can't lay out and overlay all these relationships mentally to make a decision (at least not consciously).


However, a powerful computer program could do this. And the result would almost certainly be better decisions. There are obvious and serious ethical concerns about such a system. In addition, the temptation for misuse might be overwhelming. Such a system, if it did exist, would have to be cleverly designed to prevent any one power from "taking it over" for its own ends. There would also have to be a way to use all these similarities while preventing the revelation of the identities of the individuals. All of that, however, is grist for another mill. Let's return to the basic idea of making decisions by using multiple matrices of similarity to the existing case rather than relying on general rules based on what has been found to be true "of people."
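To make "multiple matrices of similarity" slightly more concrete, here is a minimal sketch of my own; it is not from the essay, and the factor names, weights, and cases are all invented for illustration. Each prior case influences the estimate in proportion to how closely it is related to the current case across several dimensions at once.

```python
# A minimal, invented sketch of similarity-weighted decision support:
# prior cases "vote" on the outcome in proportion to how closely related
# they are to the current case across several factors at once.

def similarity(case_a, case_b, factors):
    """Average similarity across factors; each factor returns a value in [0, 1]."""
    return sum(f(case_a, case_b) for f in factors) / len(factors)

def weighted_estimate(current, prior_cases, factors):
    """Similarity-weighted average of the outcomes of prior cases."""
    weights = [similarity(current, c, factors) for c in prior_cases]
    return (sum(w * c["outcome"] for w, c in zip(weights, prior_cases))
            / sum(weights))

# Illustrative (made-up) relatedness factors: same family, same household, age gap.
factors = [
    lambda a, b: 1.0 if a["family"] == b["family"] else 0.2,
    lambda a, b: 1.0 if a["household"] == b["household"] else 0.3,
    lambda a, b: max(0.0, 1.0 - abs(a["age"] - b["age"]) / 50.0),
]

me = {"family": "Smith", "household": "H1", "age": 45}
prior_cases = [
    # outcome = 1.0 means a treatment worked for that person; 0.0 means it did not
    {"family": "Smith", "household": "H1", "age": 48, "outcome": 1.0},  # a brother
    {"family": "Jones", "household": "H9", "age": 30, "outcome": 0.0},  # an unrelated case
]

print(weighted_estimate(me, prior_cases, factors))  # ~0.71: leans toward the close relative
```

A real system would have to learn the weights on each factor and would face all the privacy and misuse concerns raised above; the point here is only the shape of the computation: similarity-weighted borrowing of other people's experience rather than one general rule.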

This may be essentially what the human brain already does. A small town doctor in the last century would see people on multiple occasions; see entire families; and would undoubtedly perceive patterns of similarity that were based on those specific circumstances. The Smith family would all come in with allergies when the cottonwood trees bloomed. And so on. But he or she only sees a limited number of cases, even in an entire lifetime. Suppose instead, she or he could "see" millions of cases as well as their relationships to each other? Such a doctor might well be able to perform as well as the computer and much better than they would today.

Can it be better done by collecting huge families of data and having a computer do the decision making? Or can it be done better by giving human experts access to much larger databases of inter-related case studies? What are the potential societal and ethical implications and needed safeguards for each approach?

The medical domain is only one of thousands of domains that could do better decision making this way. For example, one could use a similar approach in diagnosing problems with automobiles or tires, in helping students learn trigonometric functions, or in determining which fertilizers and watering schedules work best for which crops in which soils. You might call this "whole body" decision making. It is a term also reminiscent of the phrase, "Put your whole body into it" (as when cracking a home run into the upper deck!).

It is also reminiscent of the following situation. When you accidentally burn your finger, it does not just affect your finger. You jump back with your whole body. There are longer-lasting effects in your brain, your stress hormones, your blood pressure. And various organs and cell types will be involved in healing the burn on your finger. Your body works as a whole. But it is not an undifferentiated whole. Your earlobe may not be much involved with healing your finger. The body is tuned to have communication paths and supply chains where they are needed. It's had four billion years to work this out.

Of course, the way the body interacts is largely, though not wholly, determined by architecture. Even if your body decided that your earlobe should be involved, there is no way for the body to do that. To some extent, it can modify the interactions, but only within very predefined limits. On the other hand, the brain is much more flexible when it comes to relating one thing to another. We can learn virtually any association. But, at least consciously, we are limited in the number of things and experiences we knowingly take into account while making a decision.

What people might say would lead you to believe that they very often base decisions on only one similar case. "Sciatica you say? Oh, yeah. My cousin Billy had that. Had an operation to remove a disk and the pain totally vanished. Of course, three months later it was back. In his …well… back in his back." It could be the case that there is more sophisticated pattern matching going on than meets the eye. Sadly though, most laboratory experiments reveal that, under controlled conditions, people seem to suffer from a number of reasoning flaws most of the time. I believe that the current crop of difficulties people have with reasoning is not inevitable. I think it's because of cultural stories, and that with new cultural stories, we could do a better job of thinking. And we might be able to further multiply our thinking ability by giving the right kind of high-speed access to thousands or millions of similar cases, along with presentations based on how various cases are related. Or, we could have the computer do it.

Indeed, speaking of “family stories” that are common in our culture, I actually think that we have a “hierarchy” of thinking based on a patriarchal family structure. We do experiments and report on a teeny and largely preset sliver of the reality that was the experiment. A person reads about this and remembers a teeny sliver of what was in the paper. When it comes to a specific case, the person may or may not consciously remember that sliver. This is the “rule based” approach and it is probably better than nothing. A more holistic experience-based approach is to allow the current case to “resonate” with a vast amount of experience.  Of course, both methods can be deployed as well and perhaps there can even be a meaningful dialogue between them. But it may be worth considering taking a more “whole body” approach to complex decision making.


(The story above and many cousins like it are compiled now in a book available on Amazon: Tales from an American Childhood: Recollection and Revelation. I recount early experiences and then relate them to contemporary issues and challenges in society.)

https://www.amazon.com/author/truthtable

twitter: JCharlesThomas@truthtableJCT

A Bridge too Far?

12 Saturday Nov 2016

Posted by petersironwood in driverless cars, psychology, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, ethics, Food Safety, Globalization


A Bridge Too Far? Have We Overdone Globalization?

There are many benefits to globalization. Indeed, I have been somewhat involved personally in attempting to make one of the organizations I belong to more global. In the early days of the Association for Computing Machinery's Special Interest Group in Computer Human Interaction, major conferences were held in North America and most of the attendees were from North America, with a good number of European colleagues joining. Over time, there have been more local chapters worldwide, and we have held our major conference in Europe several times and recently held a very successful conference in South Korea. Others have been held on other continents as well. I have no doubt whatever that this process has brought a wonderful diversity of thought into our field that would not be there if we had stayed focused in North America. Apart from the progress in an academic field, meeting people from all over the world provided a huge opportunity for everyone involved. If you meet decent people from all over the world, it certainly becomes more difficult to "demonize" them or desire your government to bomb them.

Similarly, the economic benefits of "Free Trade" have been touted for a long time and by many economists. Although opinions differ somewhat, most economists believe that the net effect that freer trade has had, for example, on the US economy is good, not only in providing cheaper goods for consumers but ultimately in creating more jobs than are lost. Of course, if you are one of the people whose job is lost and you have almost no prospect of getting one at equal or greater pay, that is small comfort. I am willing to grant that, on average, it makes more sense from an efficiency standpoint to have the "cheapest" place produce goods and services, other things being equal.

Naturally, other things are seldom equal, and jobs often shift overseas from North America and Europe to places that not only pay their workers less but also have very lax safety conditions, loose child labor laws, loose (if any) controls on environmental impact, and allow harassment of workers. In addition, there can be unanticipated costs associated with coordination across time zones, cultures, and educational backgrounds. The predicted savings of moving operations overseas are not always realized.

I have seen all of these issues addressed before, but I would like to focus on another issue: the impact of situational ethics. We all like to believe that we are one of the “good guys.” We like to believe that we (and indeed, most people) behave ethically most of the time and it is only a few “bad apples” who behave unethically. When people’s behavior has actually been studied, though, what we see is a more nuanced picture. Most people, most of the time, in most situations, cheat “a little bit” and about as much as they assume other people cheat. However, the propensity to cheat depends a lot on the details of the situation. In particular, people are more likely to cheat or take more than their fair share when they are removed from the situation.

For example, if ten people are sitting around a table passing around a plate of twenty Easter Eggs, the vast majority of people will make a quick calculation and pick two. Indeed, if someone is allergic and passes on the eggs, leaving two extra to share among the remaining nine people, everyone falls all over themselves to offer the extras to someone else. It’s extremely rare for someone to start by taking six or seven eggs for themselves! No-one would think of taking all twenty!

Now, imagine instead that the Monday after Easter, I bring into my work group (which happens to have ten people) 20 Easter Eggs. I tell everyone at the morning staff meeting that I brought in 20 Easter Eggs and put them in the fridge next to the coffee maker. Let us assume that all ten of us get along pretty well. The chance that someone goes into the break room and takes three or four eggs increases hugely over the “sitting around the table” scenario.

 

We humans are social animals. We respond to social cues and we care about our reputation. Most of us experience empathy. If we are sitting around the table and take more than our share of eggs, we don’t just worry that others will judge us badly. We genuinely do not want to “feel the pain” of someone looking forward to the eggs and not getting any. That’s just the way we are wired. If we take more than our share from the break room, however, it is far more abstract. We don’t really know whether everyone will want Easter Eggs. And, even if we are pretty sure they will, we don’t know who the last person will be. We can’t really “see” the disappointment of the last few people who open the fridge.

Now, consider how this plays out in commerce. Imagine that you are a baker of bread for a local village. It doesn’t really matter that much whether you are the baker for a small town in Vermont, Germany, England, France or Egypt. Of course, you want to make enough money to survive, but you want to make really good bread. You want people to say good things about your bread. You want to think of the faces you recognize and of your bread being part of the pleasure of their meals. You want to be part of having them and their families grow up and thrive because of your bread.

Now, contrast this with being a worker in a bread factory that makes bread that is shipped all over the country. Again, it doesn’t matter that much what the country is, but let’s assume it’s a factory outside of Paris. You feel some obligation to do a good job, but you are far less invested in making sure your bread is especially good than if you were the baker in a small town. Part of the reason is that you won’t really see that many faces of the people eating your bread. Part of the reason is also that you are following a recipe and a procedure that someone else constructed for you. Of course, other things being equal, you’d like to make a good product and do a good job — and not just because you could lose your job if you don’t. It’s more than that. Most people really do want to do a quality job. But suppose one day the boss comes in and says, “Hey folks. Bad news. Profits are down and costs are up. We are really getting squeezed. We are going to change our recipe to put a little more water and a little less egg in the bread. It will save costs and we’ll be able to stay in business. And, you’ll be able to keep your job.” You realize that this will make the bread a tiny bit less tasty and a bit less nutritious, but still, you do need to keep your job. So, you go along, as do your fellow workers.

Now suppose a few months later, the boss comes in and says, “More bad news. We are going to have to cut costs still further. We are going to add more water, but to keep the bread from being too runny to bake properly, we are going to add a bit of glue. Most people won’t notice the taste and most people won’t get sick enough to die from it, although a few might. Still, we need this to stay in business.” I believe that at this point, there would be a rebellion. You would not go along with this and neither would most of your colleagues. But we need to remember that in France, there are strong unions, the population reads, and there is a government that you may not agree with but that you count on to enforce laws. You may not be able to get a job as good as the bread factory job, but you will get something. If all else fails, you have friends and relatives you can count on, as well as a financial safety net. You have reasonable costs for health care.

Now suppose instead that this factory is not outside Paris, shipping bread to the rest of France. Instead, let’s imagine it’s in a country that is far more authoritarian and hierarchical. You are in a small village constructed solely for the purpose of making bread at a giant factory. You are not making bread for your fellow citizens. This bread is being shipped overseas to somewhere you have very little knowledge of and no realistic prospects of ever visiting. Even under these circumstances, I believe the vast majority of people would like to do the right thing; they would like to do a good job. However, you are being told to adulterate the bread in order to keep your job. You already owe two months’ rent on the company housing, which you would have no way to pay off without your job. You have zero other job prospects in any case. There is nothing in the town except the bread factory. You cannot call up “Sixty Minutes” or the local newspaper or the police and protest this. You know from your own personal experience that every other worker is likely to go along. And so do you. It isn’t because the people in all these previous scenarios are “good” while the ones in this scenario are “bad.” It’s because the scenario has become increasingly divorced from our natural social cues for doing the “right thing.”

In essence, this points to a “hidden cost” of globalization. It isn’t just a question of efficiency. As producers become more and more isolated from the consumers in terms of geography, culture, and physical contact, and as more and more steps intervene, there is an increasing process of abstraction. Along with increasing abstraction, it becomes easier and easier for people to avoid, ignore, or actively work against ethical principles. (By the way, there is another hidden cost to globalization: the bread may not be as tuned to local tastes as bread made in the village, but that’s a topic for another post.)

Simultaneously, there is another sort of abstraction going on. The top executives of the hypothetical “bread company” are not themselves making bread. They are not meeting with consumers. What they are looking at is numbers; specifically, they are looking at profit and loss, ROI, and their stock value. So for them, in fact, it has very little if anything to do with nutrition, bread, the pleasure of eating, or ethics. It is all a numbers game. The numbers do not typically reflect much about ethics. Of course, there is a chance that the adulterated bread may come to light, and that might be slightly embarrassing, but the chance of the top executives going to jail is slim. True, they may scapegoat the local manager or some of the workers, but they themselves are fairly immune and they know this. But it isn’t only that they are immune from prosecution. It is also because they will not have to look the sick end users in the eye.

Besides the abstraction that comes from remote geography and the abstraction that comes from monetization of interaction (as opposed to actual face to face interaction), there is another kind of abstraction that makes unethical behavior easier. Discussions of driverless cars lately have quite rightly begun to focus on ethics. One scenario involves a car having to “decide” whether to run over a small number of children or veer off the road, quite possibly killing the driver. Regardless of what you personally think the “right answer” is, I contend that most human drivers in control of such a car would instinctively swerve off the road and avoid the children even though it was likely to result in a serious accident or death for the driver. It would be extremely difficult for most drivers to choose intentionally to run over the children to save their own skins. On the other hand, if you worked at a car company as a programmer, it would be far less stressful to program the car to behave that way. It would be easy to rationalize.

“Well, the chances are, this section of code is never going to actually run.”

“Well, the driver after all is the one paying for the car. And, he or she does have the option to over-ride.”

“Well, if I don’t program what I am ordered to program, what is the point really? They will fire me and hire someone else to program it and they will keep doing that until they find somebody who will program it that way.”

All is “well.” Or is it?

But I contend that this same programmer, if they were actually driving the car and seeing the faces of little children, would quite likely swerve off the road to avoid the kids.
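To see how bloodless that choice looks from a keyboard, here is a deliberately hypothetical sketch (nothing like it exists, so far as I know, in any real vehicle’s software; the function, its parameters, and the threshold are all invented):

def choose_maneuver(pedestrians_in_path, occupant_risk_if_swerve):
    # Hypothetical illustration only; no actual car is programmed this way.
    # The life-and-death dilemma has shrunk to a comparison of two numbers.
    if pedestrians_in_path > 0 and occupant_risk_if_swerve < 0.9:
        return "swerve"        # accept the risk to the occupant
    return "stay_in_lane"      # protect the paying customer; "they can always override"

Typed in a cubicle, that branch is just another test case that will “probably never run.” Behind the wheel, it is a child’s face in the windshield.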

Yes, we humans have developed some fairly elaborate ethical codes, but often we behave “ethically” simply because our sociality is “built in” genetically and guides us to the ethically correct behavior. If we abstract away from social situations, whether through geography, monetization of value, or by programming another entity, our “instinctive” ethical behavior becomes easier and easier to over-ride. Perhaps then, rather than making unethical behavior “easier” for people by removing social cues, we need to re-instate them — perhaps even amplify them. If you really need to send a drone into an elementary school, maybe you need to hear the screams of the unwitting “participants.”

—————————

https://en.wikipedia.org/wiki/The_Honest_Truth_about_Dishonesty#/media/File:The_Honest_Truth_about_Dishonesty.jpg

http://tinyurl.com/hz6dg2

 

Is Smarter the Answer?

31 Monday Oct 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, learning organization


Lately, I have been seeing a fair number of questions on Quora (www.quora.com) that basically question whether we humans wouldn’t be “better off” if AI systems do “take over the world.” After all, it is argued, an AI system could be smarter than humans. It is an interesting premise and one worthy of consideration. After all, it is clear that human beings have polluted our planet, have been involved in many wars, have often made a mess of things, and right now, we are a  mere hair’s breadth away from electing a US President who could start an atomic war for no more profound reason than that someone disagreed with him or questioned the size of his hands.

Personally, I don’t think that having AI systems “replace” human beings or “rule them” would be a good thing. There are three main reasons for this. First, I don’t think that the reason human beings are in a mess is because they are not intelligent enough. Second, if AI systems did “replace” human beings, even if such systems were not only more intelligent but also avoided the real reasons for the mess we’re in (greed and hubris, by my lights), they could easily have other flaws of equal magnitude. The third reason is simply that human life is an end in itself, and not a means to an end.  Let us examine these in turn.

First, there are many species of plants and animals on earth that are, by any reasonable definition, much less intelligent than humans and yet have not over-polluted the planet nor put us on the brink of atomic war. There are at least a few other species such as the dolphins that are about as intelligent as we are but who have not had anything like the world-wide negative ecological impact that we have. No, although we often run into individual people who act against our (and their own) interest, and it seems as though we (and they) would be better off if they were more intelligent, I don’t think lack of intelligence (or even education) is the root of the problem with people.

Here are some simple, everyday examples. I went to the grocery store yesterday. When I checked out, someone else packed my groceries. Badly. Indeed, almost every time I go to the store, they pack the groceries badly (if I can’t pack them myself). What do I mean by badly? One full bag had ripe tomatoes at the bottom. Another paper bag was filled with cans of cat food. It was too heavy for the handles. Another bag was packed lightly, but too full, so that the handles would break if you held the bag naturally. It might be tempting to think that this bagger was not very intelligent. I believe that the causes of bad packing are different. First, packers typically (but not universally) pay very little attention to what they are actually doing. They clearly seem to be thinking about something other than what they are doing. Indeed, this describes a lot of human activity, at least in the modern USA. Second, packers are in a badly designed system. Once my cart is loaded up, another customer is already having their food scanned on the conveyor belt and the packer is already busy. There is no time to give feedback to the packer on the job they have done. Nor is the situation really very socially appropriate. No matter how gently done, a critique of their performance in front of their colleagues and possibly their manager will be interpreted as an evaluation rather than an opportunity for learning. Even if I did give them feedback, they may or may not believe it. It would be better if the packer could follow me home and observe for themselves what a mess they have made of the packing job. I think if they did that a few times, they’d be plenty smart enough to figure out how to pack better.

Unfortunately, packing is not the only example of this type of system. Another common example is that programmers develop software. These people are typically quite intelligent. But they often build their software and never get a chance to see their software in action. Many organizations do not carry out user studies “in the wild” to see how products and services are actually used. It isn’t that the software builders are not smart. But it is problematic that they do not get any real feedback on their decisions. Again, as in the case of the packers, the programmers exist in an organizational structure that makes honest feedback about their errors far too often seem like an evaluation of them, rather than an occasion for learning.

A third example is hotel personnel. A hotel is basically a service business. The cost of the room is a small part of the price. A hotel exists because it serves the customers. Despite this, people behind the desks seldom have incentives and mechanisms to hear, understand, and fix problems that their customers encounter. A quintessential example came in Boston when my wife and I were there for a planning meeting for a conference she would be chairing in a few months. When we checked out, the clerk asked whether everything was all right. We replied that the room was too hot but we couldn’t seem to get the air conditioning to work. The clerk said, “Oh, yes! Everyone has that problem. You need to turn on the heater for the A/C to work.” This was a bad temperature control design for starters, but the clerk’s response clearly indicated that they were aware of the problem but had no power (and/or incentive) to fix it.

These are not isolated examples. I am sure that you, the reader, have a dozen more. People are smart enough to see and solve the problems, but that is not their job. Furthermore, they will basically get “shot down” or at best ignored if they try to fix the problem. So, I really don’t think the issue is that people are not “smart enough” to fix many of the problems we have individually.  It is that we design systems that make us collectively not very smart. (Of course, in outrageous cases, even some individual humans are so prideful that they cannot learn from honest feedback from others).

Now, you could say that such systems are themselves a proof that we are not smart enough. However, that is not a very good explanation. There are existence proofs of smarter organizations. The sad part is that they are exceptions rather than rules. In my experience, what keeps people from adopting better organizations (e.g., ones where people are empowered to understand and fix problems) is hubris and greed, not a lack of intelligence.

Firstly, in many situations, people believe that they already know everything they need in order to do their job. They certainly don’t want public feedback indicating that they are making mistakes (i.e., could improve) and this attitude spreads to their processing of private feedback. You can easily imagine a computer programmer saying, “I’ve been writing code for User Interfaces for thirty years! Now, you’re telling me I don’t know how?” Why can we imagine that so easily? Because the organizations that most of us live in are not organizations where learning to improve is stressed.

In many organizations, the rules, processes, and management structure make very little sense if the main goal is to make the organization as effective as possible. Instead, however, they make perfect sense if the main goal of the organization is to ensure that the people who have the most power and make the most money keep having the most power and making the most money. In order to do that on an ongoing basis, it is true that the organization must be minimally competent. If they are a grocery store, they must sell groceries at some profit. If they are a software company, they need to produce some software. If they are a hotel, they can’t simply poison all their potential guests. But to stay in business, none of these organizations must do a stellar and ever-improving job.

So, from my perspective, the reason that most organizations are not better learning organizations is not that we humans are not intelligent enough. The reason for marginally effective organizations is that the actual goal is mainly to keep people at the top in power. Greed is the biggest problem with people, not lack of intelligence. History shows us that such greed is ultimately self-defeating. Power corrupts, all right, and eventually power erodes itself or explodes itself in revolution. But greedy people continue to believe that they can outsmart history. Dictators believe that they will not suffer the same fate as Hitler or Mussolini. CEOs believe their bad deeds will go unpunished (indeed, often that’s true). So-called leaders often reject criticism by others and eventually spin out of control. That’s hubris.

I see no reason whatever to believe that AI systems, however intelligent, would be more than reflections of greed and hubris. It is theoretically possible to design AI systems without hubris and greed, but it is also quite possible to develop human beings in whom hubris and greed are not the predominant factors in motivation. We all know people who are eager to learn throughout life; who listen to others; who work collaboratively to solve problems; who give generously of their time and money and possessions. In fact, humans are generally very social animals and it is quite natural for us to worry more about our group, our tribe, our country, our family than our own little ego. How much hubris and greed are in an AI system will very much depend on the nature and culture of the organization that builds it.

Next, let us consider what other flaws AI systems could have.

Author Page on Amazon

Pros and Cons of Artificial Intelligence

29 Thursday Sep 2016

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, Turing, user experience


The Pros and Cons of AI Part Three: Artificial Intelligence

We have already shown in the two previous blogs why it is more effective and efficient to replace eating with Artificial Ingestion and to replace sex with Artificial Insemination. In this, the third and final part, we will discuss why human intelligence should be replaced with Artificial Intelligence. The arguments, as we shall see, are mainly simple extrapolations from replacing eating and sex with their more effective and efficient counterparts.

Human “intelligence” is unpredictable. In fact, all forms of human behavior are unpredictable in detail. It is true that we can often predict statistically what people will do in general. But even those predictions often fail. It is hard to predict whether and when the stock market will go up or down or which movies will be blockbuster hits. By contrast, computers, as we all know, never fail. They are completely reliable and never make mistakes. The only exceptions to this general rule are those rare cases where hardware fails, software fails, or the computer system was not actually designed to solve the problems that people actually had. Putting aside these extremely rare cases, other errors are caused by people. People may cause errors because they failed to read the manual (which doesn’t actually exist because, to save costs, vendors now expect that users should look up the answers to their problems on the web) or because they were confused by the interface. In addition, some “errors” occur because hackers intentionally make computer systems operate in a way that they were not intended to operate. Again, this means human error was the culprit. In fact, one can argue that hardware errors and software errors were also caused by errors in production or design. If these errors see the light of day, then there were also testing errors. And if the project ends up solving problems that are different from the real problems, then that too is a human mistake in leadership and management. Thus, as we can see, replacing unpredictable human intelligence with predictable artificial intelligence is the way to go.

Human intelligence is slow. Let’s face it. To take a representative activity of intelligence, it takes people seconds to minutes to do simple square roots of 16-digit numbers while computers can do this much more quickly. It takes even a good artist at least seconds and probably minutes to draw a good representation of a birch tree. But Google can pull up an excellent image in less than a second. Some of these will not actually be pictures of birch trees, but many of them will.

Human intelligence is biased. Because of their background, training and experience, people end up with various biases that influence their thinking. This never happens with computers unless they have been programmed to do something useful, in which case some values will have to be either programmed into them or learned through background, training and experience.

Human intelligence in its application most generally has a conscious and experiential component. When a human being is using their intelligence, they are aware of themselves, the situation, the problem and the process, at least to some extent. So, for example, the human chess player is not simply playing chess; they are quite possibly enjoying it as well. Similarly, human writers enjoy writing; human actors enjoy acting; human directors enjoy directing; human movie goers enjoy the experience of thinking about what is going on in the movie and feeling, to a large degree, what people on the screen are attempting to portray. This entire process is largely inefficient and ineffective. If humans insist on feeling things, that could all be accomplished much more quickly with electrodes.

Perhaps worst of all, human intelligence is often flawed by trying to be helpful. This is becoming less and less true, particularly in large cities and large bureaucracies. But here and there, even in these situations that should be models of blind rule-following, you occasionally find people who are genuinely helpful. The situation is even worse in small towns and farming communities where people are routinely helpful, at least to the locals. It is only when a user finds themselves interacting with a personal assistant or audio menu system with no possibility of a pass-through to a human being that they can rest assured that they will not be distracted by someone actually trying to understand and help solve their problem.

Of course, people in many professions, whether they are drivers, engineers, scientists, advertising teams, lawyers, farmers, police officers, etc., will claim that they “enjoy” their jobs or at least certain aspects of them. But what difference does that make? If a robot or AI system can do 85 to 90% of the job in a fast, cheap way, why pay for a human being to do the service? Now, some would argue that a few people will be left to do the 10-15% of cases not foreseen ahead of time in enough detail to program (or not seen in the training data). But why? What is typically done, even now, is to just let the user suffer when those cases come up. It’s too cumbersome to bother with back-up systems to deal with the other cases. So long as the metrics for success are properly designed, these issues will never see the light of day. The trick is to make absolutely sure that the user has no alternative means of recourse to bring up the fact that their transaction failed. Generally, as the recent case with Yahoo shows, even if the CEO becomes aware of a huge issue, there is no need to bring it to public attention.

All things considered, it seems that “Artificial Intelligence” has a huge advantage over “Natural Intelligence.” AI can simply be defined to be 100% successful. It can save money, and that money can be appropriately partitioned to top company management, shareholders, workers, and consumers. A good general formula to use in such cases is the 90-10 rule; that is, 90% of the increased profits should go to the top management and 10% should go to the shareholders.

As against increased profits, one could argue that people get enjoyment out of the thinking that they do. There is some truth to that, but so what? If people enjoy playing doctor, lawyer, and truck driver, they can still do that, but at their own expense. Why should people pay for them to do that when an AI system can do 85% of the job at nearly zero costs? Instead of worrying about that, we should turn our attention to a more profound problem: what will top management do with that extra income?

Author Page on Amazon

Turing’s Nightmares

 

 

Pros and Cons of Artificial Insemination

27 Tuesday Sep 2016

Posted by petersironwood in psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, user experience


 

The Pros and Cons of AI: Part Two (Artificial Insemination).

Animal husbandry and humane human medical practice offer up many situations where artificial insemination is a useful and efficient technique. It is often used in horse breeding, for example, to avoid the risk of injury that more natural breeding might engender. There are similarly many cases where a couple wants to get pregnant and the “ordinary” way will not work. This could be due to physical problems with the man, the woman, or both. In some cases, it will even be necessary to use sperm from someone who is not going to be the legal father. Generally, the couple will decide it is more acceptable emotionally if the sperm donor is anonymous and the insemination is not done via intercourse.

But what about all those cases where the couple tries, and indeed succeeds, the “old-fashioned way”? An argument could certainly be made that all intercourse should be replaced with AI (artificial insemination).

First, the old-fashioned way often produces emotional bonding between the partners. (Some even call it “making love.”) No-one has ever provided a convincing quantitative economic analysis of why this is beneficial. It is certainly painful when pair-bonded individuals are split apart by divorce or death. AI would not prevent all pair bonding, but it could help reduce the risk of such bonds being formed.

Second, the old-fashioned way risks the transmission of sexually transmitted diseases. Even when pairs are not trying to get pregnant and even when they have the intention of using forms of “protection”, sometimes passion overtakes reason and people, in the heat of the moment, “forget” to use protection. AI provides an opportunity for screening and for greatly reducing the risk of STDs being spread.

Third, the combinations of genes produced by sexual intercourse are random and uncontrolled. While it is currently beyond the state of the art, one can easily imagine that sometime in this century it will be possible to “screen” sperm cells and only choose the “best” for AI.

Fourth, traditional sex is often quite expensive in terms of economic costs. Couples will often spend hours engaging in procreational activities that need only take minutes. Beyond that, traditional sex is often accompanied by special dinners, walks on the beach, playing romantic music, and often couples continue to stay together in essentially unproductive activities even after sex, such as cuddling and talking.

There are probably additional reasons why AI makes a lot of sense economically and why it is a lot better than the old-fashioned alternative.

Of course, one could take the tack of considering life as something valuable for the experiences themselves and not merely as a means to an end of higher productivity. This seems a dangerously counter-cultural stand to take in modern American society, but in the interest of completeness, and mainly just to prove its absurdity, let us consider for a moment that sex may have some intrinsic and experiential value to the participants.

Suppose that lovers take pleasure in the sights, sounds, smells, feels, and tastes associated with their partners. Imagine that the sexual acts they engage in provide pleasure in and of themselves. There seems to be a great deal of uncertainty about the monetary value of these experiences since the prices charged for artificial versions of these experiences can easily vary by a factor of ten or more. In fact, there have been reports that some people will only engage in sex that is not paid for directly.

So, on the one hand, we have the provable efficiency and effectiveness of AI. On the other hand, we have human experiences whose value is problematic to quantify. The choice seems obvious. Sometime in this century, no doubt, all insemination will be done artificially so that everyone (or at least some very rich people)  can enjoy the great economic benefits that will come about from the increased efficiency and effectiveness of AI as compared with “natural” sex.

As further proof, if it is needed, imagine two island countries alike in every way in terms of climate, natural beauty, current economic opportunity, literacy and so on. In fact, the only way these two islands differ is that on one island (which we shall call AII for Artificial Insemination Isle) all “sex” is limited to AI whilst on the other island (which we shall call NII for Natural Insemination Isle) sex is natural and people can spend as much or as little time as they like doing it. Now, people are given a choice about which island to live on. Certainly, with its greater prospects of economic growth and efficiency, everyone would choose to live on AII while NII would be virtually empty. Readers will recognize that this is essentially the same argument as to why “Artificial Ingestion” should surely replace “Natural Ingestion” — cheaper, faster, more reliable. If readers see any holes in this argument, I’d surely like to be informed of them.

Turing’s Nightmares

Author Page on Amazon

The Pros and Cons of AI: Part One

24 Saturday Sep 2016

Posted by petersironwood in health, The Singularity, Uncategorized

≈ 10 Comments

Tags

AI, Artificial Intelligence, cognitive computing, ethics, health care, the singularity, user experience, utopia


This is the first of three connected blog posts on the appropriate uses and misuses of AI. In this blog post, I’ll look at “Artificial Ingestion.” (Trust me, it will tie back to another AI, Artificial Intelligence).

While ingestion, and therefore “Artificial Ingestion,” is a complex topic, I begin with ingestion because it is a bit more divorced from thought itself. It is easier to think of digestion as separate from thinking; that is, to objectify it in a way we cannot objectify intelligence, because in writing about intelligence, it is necessary to use intelligence itself.

Do we eat to live or live to eat? There is little doubt that eating is necessary to the life of animals such as human beings. Our distant ancestors could have taken a greener and more photosynthetic path but instead, we have collectively decided to kill other organisms to garner our energy. Eating has a utilitarian purpose; indeed, it is a vital purpose. Without food, we eventually die. Moreover, the quality and quantity of the food we eat has a profound impact on our health and well-being. Many of us live in a paradoxical time when it comes to food. Our ancestors often struggled mightily to obtain enough food. Our brains are thus genetically “wired” to search for high sugar, high fat, high salt foods. Even though many of us “know” that we ingest too many calories and may have read and believe that too much salt and sugar are bad for us, it is difficult to overcome the “programming” of countless generations. We are also attracted to brightly colored food. In our past, these colors often signaled foods that were especially high in healthful phytochemicals.

Of course, in modern societies of the “Global North” our genetic predispositions toward high sugar, high fat, high salt, highly colored foods are manipulated by greedy corporate interests. Foods like crackers and chips that contain almost nothing of real value to the human diet are packaged to look like real foods. Beyond that, billions of advertising dollars are spent to convince us that if we buy and ingest these foods it will help us achieve other goals. For example, we are led to believe that a mother who gives her children “food” consisting of little other than sugar and food dye will be loved by her children and they will be excited and happy children. Children themselves are led to believe that ingesting such junk food will lead them to magical kingdoms. Adult males are led to believe that providing the right kinds of high fat, high salt chips will result in male bonding experiences. Adult males are also led to believe that the proper kinds of alcoholic beverages will result in the seduction of highly desirable looking mates.

Over time, the natural act of eating has been enhanced with rituals. Human societies came to hunt and gather (and later farm) cooperatively. In this way, much more food could be provided on a more continuous basis. Rather than fight each other over food, we sit down in a “civilized” manner and enjoy food together. Some people, through a combination of natural talent and training, become experts in the preparation of foods. We have developed instruments such as chopsticks, spoons, knives and forks to help us eat foods. Most typically, various cultures have rituals and customs surrounding food. In many cases, these seem to be geared toward removing us psychologically from the life-giving functionality of food toward the communal enjoyment of food. For example, in my culture, we wait to eat until everyone is served. We eat at a “reasonable” pace rather than gobbling everything down as quickly as possible (before others at the table can snatch our portion). If there are ten people at the table and eleven delicious desserts, people turn many social somersaults in order to avoid taking the last one.

For much of our history, food was confined to what was available in the local region and season. Now, many people, but by no means all, are well off enough to buy foods at any season that originally were grown all over the world. When I was a child, very few Americans had even tried sushi, for example, and the very idea of eating raw fish turned stomachs. At this point, however, many Americans have tried it and most who have tried it enjoy it. Similarly, other cuisines such as Indian and Middle Eastern have spread throughout the world in ways that would have been impossible without modern transportation, refrigeration, and modern training with cookbooks, translations, and videos supplementing face to face apprenticeships.

Some of these trends have enabled some people to enjoy foods of high quality and variety. We support many more people on the planet than would have been possible through hunting and gathering. These “advances” are not without costs. First, there are more people starving in today’s world than even existed on the planet 250,000 years ago. So, these benefits are very unevenly distributed. Second, while fine and delicious foods are available to many, the typical diet of many is primarily based on highly processed grains, soybeans, fat, refined sugar, salt and additives. These “foods” contain calories that allow life to continue; however, they lack many naturally occurring substances that help provide for optimal health. As mentioned, these foods are made “palatable” in the cheapest possible way and then advertised to death to help fool people into thinking they are eating well. In many cases, even “fresh” foods are genetically modified through breeding or via genetic engineering to provide foods that are optimized for cheap production and distribution rather than taste. Anyone who has grown their own tomatoes, for example, can readily appreciate that home grown “heirloom” tomatoes are far tastier than what is available in many supermarkets. While home farmers and small farmers have little in the way of government support, at least in the USA, mega-farming corporations are given huge subsidies to provide vast quantities of poor quality calories. As a consequence, low income people can generally not even afford good quality fresh fruits and vegetables and instead are forced through artificially cheap prices to feed their families with brightly packaged but essentially empty calories.

While some people enjoy some of the best food that ever existed, others have very mediocre food and still others have little food of any kind. What comes next? On the one hand, there is a move toward ever more efficient means of production and distribution of food. The food of humans has always been of interest to a large variety of other animals including rats, mice, deer, rabbits, birds, and insects. Insect pests are particularly difficult to deal with. In response, and in order to keep more of the food for “ourselves”, we have largely decided it is worth the tradeoff to poison our food supply. We use poisons that are designed to kill off insect pests but not kill us off, at least not immediately. I grow a little of my own food and some of that food gets eaten by insects, rabbits, and birds. Personally, I cannot see putting poison on my food supply in order to keep pests from having a share. However, I am lucky. I do not require 100% of my crop in order to stay alive nor to pay off the bank loan by selling it all. Because I grow a wide variety of foods in a relatively small space, there is a lively ecosystem and I don’t typically get everything destroyed by pests. Farmers who grow huge fields of corn, however, can be in a completely different situation and a lot of a crop can fall prey to pests. If they have used pesticides in the past, this is particularly true because they have probably poisoned the natural predators of those pests. At the same time, the pests themselves continue to evolve to be resistant to the poisons. In this way, chemical companies perpetuate a vicious circle in which more and more poison is needed to keep the crops viable. Luckily for the chemical companies, the long-term impact of these poisons on the humans who consume them is difficult to prove in courts of law.

There are movements such as “slow food” and eating locally grown food and urban gardens which are counter-trends, but by and large, our society of specialization has moved to more “efficient” production and distribution of food. More people eat out a higher percentage of the time and much of that “eating out” is at “fast food” restaurants. People grab a sandwich or a bagel or a burger and fries for a “quick fix” for their hunger in order to “save time” for “more productive” pursuits. Some of these “more productive” pursuits include being a doctor to cure diseases that come about in part from people eating junky food and spending most of their waking hours commuting, working at a desk or watching TV. Other “more productive” pursuits include being a lawyer and suing doctors and chemical companies for diseases. Yet other “more productive pursuits” include making money by pushing around little pieces of other people’s money. Still other “more productive pursuits” include making and distributing drugs to help people cope with lives where they spend all their time in “more productive pursuits.”

Do we live to eat or eat to live? Well, it is a little of both. But we seem to have painted ourselves into a corner where most people most of the time have forgone the pleasure of eating that is possible in order to eat more “efficiently” so that we can spend more time making more money. We do this in order to…? What is the end game here?

One can imagine a society in which eating itself becomes a completely irrelevant activity for the vast majority of people. Food that requires chewing takes more time, so let’s replace chewing with artificial chewing. Using a blender allows food with texture to be quickly turned to a liquid that can be ingested in the minimum necessary time. One extreme science fiction scenario was depicted in the movie “Soylent Green,” in which the food of that name turns out to be made from the bodies of people killed to make room for more people. The movie is set in 2022 (not that far away) and was released in 1973. Today, in 2016, there exists a food called “soylent” (https://en.wikipedia.org/wiki/Soylent_(food)) whose inventor, Rob Rhinehart, took the name from the movie. It is not made from human remains but the purpose is to provide an “efficient” solution to the Omnivore’s Dilemma (Michael Pollan). More efficient than smoothies, shakes, and soylent are feeding tubes.

Of course, there are medical conditions where feeding tubes are necessary as a replacement for, or supplement to, ordinary eating, as is being “fed” via an IV. But is this really where humanity in general needs to be headed? Is eating to be replaced with “Artificial Ingestion” because it is more efficient? We wouldn’t have to “waste our time” and “waste our energy” shopping, choosing, preparing, chewing, etc. if we could simply have all our nutritional needs met via an IV or feeding tube. With enough people opting in to this option, I am sure industrial research could provide ever less invasive and more mobile forms of IV and tube feeding. At last, humanity could be freed from the onerous task of ingestion, all of which could be replaced by “Artificial Ingestion.” The dollars saved could be put toward some more worthy purpose; for example, making a very few people very very rich.

There are, of course, a few problematic issues. For one thing, despite years of research, we are still discovering nutrients and their impacts. Any attempt to completely replace food with a uniform liquid supplement would almost certainly leave out some vital, but as yet undiscovered ingredients. But a more fundamental question is to what end would we undertake this endeavor in the first place? What if the purpose of life is not, after all, to accomplish everything “more efficiently” but rather, what if the purpose of life is to live it and enjoy it? What then?

Author’s Page on Amazon

Turing’s Nightmares
