petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: emotional intelligence

Myths of the Veritas: The Sixth Ring of Empathy

20 Thursday Sep 2018

Posted by petersironwood in America, management, psychology, Uncategorized, Veritas

Tags

emotional intelligence, empathy, evaluation, myth, politics, testing, truth, Veritas

Myths of the Veritas: The Sixth Ring of Empathy. 

Photo by rawpixel.com on Pexels.com

The Four, as they were now called by the tribe, achieved a high degree of esprit de corps despite being rivals. Partly, as they had discussed among themselves, this was because She-Who-Saves-Many-Lives was, from their point of view, completely unpredictable in her tasks. Furthermore, all of them understood that the slightest hint of cheating, bad-mouthing, or even approaching the boundary of good taste would likely mean the end of their candidacy. While the candidates were being tested primarily on empathy, the entire tribe well understood that the leader of the tribe must adhere to the very highest standards of ethical behavior. Why on earth would a tribe choose a leader of low moral fiber, only to set a horrible example for the whole people? For these reasons, and because, quite apart from any thought of consequences, winning at all costs, dishonor included, was simply not a way any of them wanted to live their lives, their rivalry remained an honorable one. 

Many moons passed and still She-Who-Saves-Many-Lives had not called them together to explain the trial of the Sixth Ring of Empathy. So far, it was a complete mystery. As could be expected, The Four speculated a great deal among themselves, but they realized these were merely wild guesses. They talked, and debated, and dialogued quite a lot about empathy, but they were in the dark as to the actual tasks on which they would next be judged. 

The Shaman, She-Who-Saves-Many-Lives, for her part, walked here and there throughout the people; helping with what needed to be done; advising mainly by answering question with question; always generating warmth and wisdom by her example. Her being there, each knew in their hearts, was a great gift for all the people, and they esteemed her and loved her greatly. Of course, they accepted that her seeking a successor was just another example of the great wheel of life moving around. Yet it still saddened them to imagine her gone, so they were in no way discomfited that the long stretch before the sixth trial even began went on and on. 

Unbeknownst to either the tribe as a whole or The Four, the “trial” for the Sixth Ring of Empathy had begun the instant that The Four had been chosen and walked silently back to their tents. She-Who-Saves-Many-Lives knew quite well that no one, including The Four, realized this. And she also knew that each of The Four was spending at least part of their time wisely, becoming better friends with each other and with the nuances of empathy through their mutual explorations and discussions. The Shaman planned to end the “trial” when she had enough evidence to decide precisely who would continue to the seventh and final trial. 

The Shaman had been observing many things over the past many moons. She-Who-Saves-Many-Lives had been watching how The Four interacted with each other. Who listened well? Who spoke well? Who thought of things no-one else did? Who had a good heart? Who sought the truth and had the good of all at heart? 

She listened to how everyone in the tribe spoke of everyone else, including The Four. She knew how to weigh the words she heard against the likely underlying truth because she understood the blind spots of everyone in the tribe. She-Who-Saves-Many-Lives had watched the reactions of everyone in the tribe as one or another from among The Four came near. She-Who-Saves-Many-Lives sought out many conversations with those of the tribe. She would talk of acorns, for example, and then remark on how Eagle Eyes had studied how acorns fell because she had been interested in shapes. But it was not that story that She-Who-Saves-Many-Lives was interested in. The Shaman wanted to see the story written in the face and eyes of the person receiving it. 

{Translator’s Note}: At this point in the narrative, there are several more techniques that the Shaman used but those descriptions are filled with “technical terms” of the Veritas and, so far, no-one has much idea at all what, precisely, the Shaman actually did. It seems as though the Shaman is sensing how animals react to the candidates? But that makes no sense. And, it seems as though she is “reading” their faces and body language and, even, tuning into their auras? souls? voices? thoughts? responses? hearts? And, there is a passage that — well — I know it’s crazy, but she watches how music vibrates through these candidates? Or, how they resonate with various vibrations? None of the few remaining on this planet who claim to know anything about Veritas claims to have any knowledge of these arcane and possibly archaic arts. The oddest part is that the whole time I was trying to make sense of it, what came to mind were scenes involving the high-tech scanning from Star Trek! 

Although much of the Shaman’s focus was on the most important task of her life, viz., choosing her successor, she also took note of the friendship of POND MUD and ALT-R. She had hoped they could learn from each other, but she feared that this friendship had taken a turn toward the way of Not-Life, where truth is sacrificed as easily as one pulls off an ant’s leg. There were now simply too many reeds of evidence — more than enough to make a basket — that POND MUD and ALT-R were not going to be re-entered into the seeking of the Rings of Empathy. The Shaman knew that they had agreed to disrupt the trial. Fortunately, their planning was still quite vague because, like the rest of the Village, the two of them had no idea that the trial was underway. ALT-R, however, was discovered to be perpetrating one scheme on his own: to sow the seeds of jealousy among The Four and also between POND MUD and Shade Walker. This could help him “control” POND MUD and could well disrupt the entire trial, improving the odds that POND MUD and ALT-R might regain a chance at the Rings of Empathy.  

Though very bright, ALT-R was not among those of ever-alert eyes and ears. When he began calculating a plot, he had a tendency to pace while speaking aloud. In such a state, his cleverness peaked; however, he could also fail to notice so noiselessly slow-moving a person as She-Who-Saves-Many-Lives. The Shaman was shocked. There had been hot-tempered people among the Veritas, and those who were occasionally less than truthful when describing their romantic involvements to others. But the Shaman was now observing what certainly appeared to be an actually evil person, one who was going to subvert the process of succession in order to grab power for himself. He did not see, or did not care, what such a grabbing of power would do to the tribe, to the people, to the earth. 

The Shaman shuffled away as silently as she had come. Perhaps the time had come for both POND MUD and ALT-R to be banished from the tribe before more evil spread. At this point, She-Who-Saves-Many-Lives happened upon a very perplexed-looking young woman: She-of-Many-Paths. She-Who-Saves-Many-Lives stood still, held out her arms before her, hands up, smiled at the youth, and said, “Good Day. Or should I say, ‘Good Day?’ What seems to be the trouble?” 

She-of-Many-Paths answered: “It’s nothing. It’s just. Shade Walker and POND MUD seemed to be about to fight over me. And I’m not. I don’t like POND MUD at all. I mean, not that way. But I do like Shade Walker. But Trunk-of-Tree is beautiful and large too. I just — but they can’t fight for me. I will choose who I want and what did you mean about our children pulling us together? Anyway, it’s really nothing and it’s — you know — just silly stuff among boys and girls, nothing that you’d…I mean that you’d be interested in.”

There was warm humor in the eyes of She-Who-Saves-Many-Lives as she answered. “It’s all right, She-of-Many-Paths, I know you were about to say that I wouldn’t know anything about young love because now I’m an old woman, in fact, a very old woman. Of course, you are quite right. I was never myself a baby or a toddler or a young girl or a very confused adolescent. I fell fully hatched out of a very old and very craggy willow. That’s why my skin is so wrinkled. The bark pressed against my skin all those years before I finally fell out full-grown, and it left me wrinkled and blotched as you see me now. So, I would know nothing of the catching of the breath and the full-throttled beating of the heart nor the feeling of melting and the burning skin. But if I had been born a baby and lived a full life, I would tell you one thing, and that would be that you may live through all that and some day be lucky enough to be an old lady such as I. But meanwhile, come here. Take my hands. Look into my heart and see what you see in my past.” 

She-of-Many-Paths walked slowly forward to take the hands of She-Who-Saves-Many-Lives. As she stepped forward, her embarrassment subsided. Of course, everyone is part of the wheel of life, she thought. She imagined She-Who-Saves-Many-Lives as a youth. And then — there she was! She could see her plainly with long black hair and strong limbs. She was taller and her skin was smooth. And, she was in love. And again. And love was like the love that is the very foundation of life and love is terrifying and wonderful and much better than okay. It is Life. She-Who-Saves-Many-Lives grew out of such a love and her parents as well and her grandparents and She-of-Many-Paths felt now quite well-named and terrified at the same time! For she was traveling out in many paths backwards in time, floating, so it seemed, through an endless tunnel, twirling slowly like a maple seed. She-of-Many-Paths could see/feel/hear backwards in time to the first Veritas and beyond to the first humans and beyond and it became almost unbearable because she was no longer She-of-Many-Paths with human eyes and brain at all. She was something else. Animal. Smell. Fear. Eat. Mate. Mate. Mate. Of course she wanted to mate! Now, She-of-Many-Paths staggered backwards, letting go of the Shaman’s hands. 

The Shaman spoke to reassure, “I see that you found the way to truly touch the tree of life through the heart of another.”

She-of-Many-Paths stammered, “What…what was that?! I could see, feel, what it was like to be you and … and before you… and it all started slow but then got fast and I was not even me.”

Photo by Johannes Plenio on Pexels.com

The Shaman spoke again, “You learned to tie your empathy to your imagination in a feedback loop. It feels a bit overwhelming at first, but it is a useful tool.” 

{Translator’s Note}: There is a thicker description in the original and, though I know it sounds crazy, the most accurate translation I could come up with is a Superheterodyne receiver.

“Overwhelming,” exclaimed She-of-Many-Paths, “indeed. But, did you actually look like that? Or, is it just how I pictured it?” 

“Most likely some combination of those and also how I pictured myself.” 

“Do you experience this? Do you … travel, see?”

“You will get better at it with practice,” answered She-Who-Saves-Many-Lives, “though you may decide not to learn to use it.” 

Shade Walker appeared around a bend and began walking toward them. She-of-Many-Paths looked about as though for an escape route, but it was too late. 

The Shaman was the first to speak. “How does it go with you, Shade Walker? How are you and POND MUD getting on these days?”

“Well, actually…” Shade Walker’s eyes darted to those of She-of-Many-Paths. “He seems to want to fight me. Well, over She-of-Many-Paths. I am not afraid to fight him. But She-of-Many-Paths should choose who she wants. What does it mean to fight over her? Also, there’s something else, She-Who-Saves-Many-Lives. I don’t sense that he actually wants to. You well know that I have continued to study the way snakes can feel/see the heat of their prey. And, I sense all the heat coming, not from POND MUD himself but from ALT-R. But I don’t really think ALT-R wants…I don’t know what he wants. It just doesn’t feel right somehow.”  

Photo by Pixabay on Pexels.com

“No, you’re quite right,” said the Shaman. “It isn’t right. I’m afraid something must be done but I am not quite ready to do it. Meanwhile, I need to find Trunk-of-Tree and Eagle-Eyes. Any idea where they might be, Shadow Walker?”

“She-Who-Saves-Many-Lives, I believe Eagle Eyes went to watch Fleet-of-Foot run. She wants to draw the way he runs. She’s talking about his form. It’s a little embarrassing. She’s not interested in his shape, I don’t think. I mean she is, but…let’s see. As for Trunk-of-Tree, he is practicing, as best he can, for the Sixth Ring of Empathy.”

“And, how, Shadow-Walker, does he propose to do that?” queried the Shaman.

“Exactly! We don’t know the next test.” Here, Shadow Walker paused and looked carefully at the Shaman for a hint or a clue. He found none. “Anyway, the way he is preparing is by practicing earlier tests. He doesn’t know what else to do.” 

“I suppose not. And, where might he be practicing?” 

“That is hard to say. I mean, I know where he is generally, but not precisely. He thinks you may ask us to redo the first task, but this time testing a finer gradation of empathy. So, he is searching for places where the number of mountain peaks seen will depend on the height of the individual. Frankly, Shaman, it seems far-fetched to me. Of course, if that is the next trial, please don’t take offense. It’s just that every trial so far has been quite different, so…well, I have no idea. Well, that’s not completely true. I have an idea but I don’t know whether it’s correct.” 

She-Who-Saves-Many-Lives smiled as she asked, “And, what is this idea, Shadow Walker?”

“Well, I think. She-of-Many-Paths and I both think…” he paused to look at the young woman who nodded almost imperceptibly. “We both think that we are in the trial. All day. Every day. It’s not about what we do when we know we’re being tested. It’s about what we do all through our lives and how we relate to other people. At first, it seemed kind of a crazy idea, no offense, but the more we thought about it and discussed it, the more sense it made.” He glanced again at She-of-Many-Paths, who spoke next. 

“Some people…some are quite good at dissembling empathy when they know they are being watched, but the real question is, what do they do when they don’t know they’re being watched. And, I have – we have – been thinking that you are somehow watching without being seen.” 

“An interesting idea,” began She-Who-Saves-Many-Lives. “Very interesting. Your curiosity will soon be satisfied. I ask all four of you to come to the council fire by my cabin tonight.” 

So it was ordered and so it was done. After dinner, the four came to a small fire that the Shaman had set within an octagon of logs. After everyone was seated, the Shaman began. 

“I want to thank you all for coming. Tonight I will reveal the names of those who have successfully earned the Sixth Ring of Empathy. I can see that two of you are quite surprised — so much so that you are bursting with questions. What would you like to know?”

Trunk-of-Tree was indeed beside himself and needed to talk, spewing his words forth rather quickly for him. “How can you have a result when we haven’t even begun the trial? We don’t even know what the task is. At least I don’t. What are we to do? Have we already done it? What? I don’t understand.” 

Eyes-of-Eagle was equally taken aback but reacted more stoically. “I would also like to understand, She-Who-Saves-Many-Lives. What do you mean? When did we do a trial?”

The Shaman nodded. “These are good questions. As you know, the Veritas put a high value on truth. I have discovered that some among our tribe are attempting to deceive. And though that does not include anyone here tonight, nonetheless, I wanted to see how you employ your gifts of empathy — or not — on a day-to-day basis, when you are not being tested, but just going about your business hunting, fishing, gathering, conversing, exploring, arguing, helping others, making baskets and tools and so on. In other words, I wanted to learn not what you could do when tested but what you would do when you were not being tested.” 

“Well, I, for one,” explained Trunk-of-Tree, “was trying to improve my skills. My empathy skills. I did our tests over and over trying to see through the eyes of others and feel the hunger of others and see through the eyes of animals. I think I have improved all of these skills. And, also, I tried different ways of how-to. That’s what I’ve been doing. Improving my empathy.” 

“Indeed, this is not a bad thing, Trunk-of-Tree. How have you used your skill — your improved skill — to help the Veritas or to help someone among the Veritas?” 

“Well,” stammered Trunk-of-Tree, “would there not be plenty of time for that once, if I became leader of the Veritas? That’s your task now, but our task is to learn empathy, right?” 

The Shaman looked at the others, “Any other comments?” 

Eyes-of-Eagle spoke next, “Well, we have been talking among ourselves a lot about empathy and about what the trial might be. I thought it would involve shape-shifting. I thought we would actually have to change our shape in some way so we could imagine what it might be like if we were smaller, or older, or more … but I can see your point. Yes, the best trial is the trial no-one knows is a trial. Shadow-Walker and She-of-Many-Paths thought you might trick us like that but I didn’t really take it seriously.” 

She-of-Many-Paths spoke, “I did not say it was a trick. Nor did Shadow-Walker. That is how you and Trunk-of-Tree characterized it. I just thought it was a slim possibility since it was taking so long. But then, the more we discussed it, the more I thought about it, the more likely it seemed that at least one of the trials wouldn’t be identified as such. In this way, our natures and choices would be revealed more fully.” 

“This is all true,” said the Shaman, “and was indeed my plan. However, I also discovered something I did not know. She-of-Many-Paths has a particular talent that is rare indeed. She can tune into the very Tree of Life through another’s heart. She can connect her empathy with her imagination. And then I discovered that Shadow-Walker can sense where the passion behind a plan originates. The development of these unusual talents is consistent with my observations that both of them have been thinking about empathy all during their activities. I am therefore giving the Sixth Ring of Empathy to She-of-Many-Paths and Shadow-Walker. 

“I need to share one other thing with all of you. I have reason to believe that sometime soon we may have some treachery in our midst. I just ask all four of you to keep your eyes, ears, and hearts open. You can use a broad-net empathy to sense when bad things are about to happen. Use it wisely.”

———————————————————————-

Author Page on Amazon

Myths of the Veritas: The Third Ring of Empathy.

16 Thursday Aug 2018

Posted by petersironwood in America, psychology, story, Uncategorized

Tags

cooperation, emotional intelligence, empathy, learning, life, myth, truth

When the full moon rose after the hottest days of summer had passed, She-Who-Saves-Many-Lives summoned the Eight-Who-Feel-Another’s-Hunger to a great council fire at their customary places. “You have served your tribe well and each of you has grown even since the first such trial. A new challenge awaits you. At your place, you will find a small piece of deerskin and upon that deerskin the picture of an animal. That animal you will observe, copy, learn from, speak to, listen to, come to love as one of your very own family. I want all those who live near you to understand your tasks as well so that they may not impede your study. 

“The full moon is here. There shall be another. And another. But on that third full moon, we will reconvene our council fire. You shall indeed share your knowledge with all the tribe. And, then, I will question you separately to determine who shall win the Third Ring of Empathy and be so invited to the next trial.” The entire council, including the Eight-Who-Feel-Another’s-Hunger, then left, all save Pond Mud, who politely asked the favor of a question. 

“Oh, She-Who-Saves-Many-Lives, I fear that though my muscles may be strongest among my peers, my powers of perception are yet weak, for I looked upon this deerskin and it appears that it may be an elk, that it may be a deer, it may be bison, but it most looks to me like…like an ant.”

She-Who-Saves-Many-Lives laughed, “It is not your perception, my young friend; it is my lack of artistic skill, though you are indeed correct. It is an ant. Now, go forth and study her for three moons.” 

“But, but, they have nothing to teach; they have no power; they have no thinking; they are teeny insignificant things that are simply pests.”

“My decision is final, Pond Mud. I only sought to aid you in removing your uncertainty. If you become Shaman, you may devise tests as you see fit.”

Pond Mud bit his lip and turned away though a slight shake of the head did not go unnoticed. 

The Shaman therefore spoke once more: “You are judging the ants, though you have not studied them. You know almost nothing about them. Spend three moons watching and then we will see whether I have given you something unworthy of study.” 

So it was that the Eight-Who-Feel-Another’s-Hunger began their various studies of Ant, Eagle, Possum, Tiger, Snake, Squirrel, Horse, and Wolf. On the moonrise of the next month, She-Who-Saves-Many-Lives bestowed on each of the eight a mask suited for the animal that they were studying. She suggested that they might want to spend some time each day trying to imagine what life was like through the skin, nose, ears, and eyes of that creature and that using the mask might help in this endeavor. 

So it was that on the third full moon, each of the eight was ready to give an account of what they had learned before the entire tribe. And, it was so. 

{Translator’s Note}: The actual legend is filled with minutiae for every single one of the eight animals. It’s not surprising that such detail would be included, for specific details about each of these other creatures could spell the difference between life and death for an individual or possibly even for the entire Veritas people. They took the time to find out about the world and pass on every detail they could to their offspring. Education was a serious business that everyone respected as crucial to their very survival. We live in a different world, however, and therefore I am only translating the first and most obvious thing or two about each animal. 

First to speak was Alt-R, who spoke of some of the cleverness of the opossum, such as keeping their unprotected ones close by and hunting at night, when they had less worry about those who might harm them; although on balance, Alt-R concluded, they seemed quite stupid. 

Next to speak was She-of-Many-Paths. She spoke with such passion and in such vivid detail that the children, and the youth, and the married, and the old of the tribe all listened in fascination and learned much about Wolf. Not just the Shaman but all could feel that indeed, she had come to love the wolves. She spoke of the way they hunted together and took turns chasing down prey until that prey was exhausted. She spoke of their social order and how they communicated and how they kept the peace among themselves. “And,” she concluded, “I’m just getting started! There is so much more to learn!” 

Eyes-of-Eagle had been assigned the Eagle. She spoke of how the eagle changed its very shape according to the task at hand. 

“When an Eagle wished to soar on the winds it spread its wings as far as possible and flattened its chest and tailfeathers. When it spotted prey below, after a few strong thrusts of its wings, it folded them tightly and made itself nearly into a teardrop. It fell like a rock, only shooting out its wings at the very last possible moment to arrest its fall and save its life and at the same time twisting just so onto the back of rabbit or squirrel or mouse!” This much was known by the adults of the tribe, but Eyes-of-Eagle had many more details to share on the subject. It was clear to all in the council that she had been aptly named. 

Shade-Walker spoke next of his observations of snakes. Like him, he had noted, a snake’s activity is much determined by the heat of the sun. But Shade-Walker then said, quite unexpectedly, that he believed that snakes could feel the heat of their prey just as we can feel the heat of a fire or the heat of another’s skin if it’s quite close. Shade-Walker noted that a snake too can change its shape. Some can unhinge their jaws, and some can swallow their prey whole because they can make that change. 

The other initiates also spoke of their many observations of Tiger, Squirrel, and Horse. 

Last to speak was Pond Mud. He still viewed ants as unworthy of study because they were weak enough to be crushed in his fingertips. However, he had noticed a kind of war between black ants and red ants. 

“Somehow, an anteater became aware of the battle and filled his belly on the fighting ants. Normally, ants are keen to sense a nearby enemy, but in the heat of battle, they didn’t seem to see the anteater at all! He seemed the only beneficiary of the ant war.” 

Most of the adults in the council were quite convinced that two more would-be inheritors of Shaman-ship would be dropped from consideration and that these would almost certainly be Pond Mud and Alt-R. Sadly, they seemed not to understand the value of creatures so different from themselves. 

Indeed, it was so ordered and came to pass. 

The next day, She-Who-Saves-Many-Lives summoned Alt-R to see her. “I have a game for you to try your luck at. Do you accept this challenge?”

“Is this part of the test? Everyone seems to think I lost. Is this a chance to redeem myself?”

“Do you accept this challenge?” 

Alt-R said, “Yes, I accept. What am I to do?”

“I have three cups, and under one of them I may hide an acorn. You choose one of the three. You will have 100 chances to guess, and we will see how many acorns you acquire,” explained She-Who-Saves-Many-Lives.   

So, the game began, and every time Alt-R thought he had at last figured out the rule, he proved wrong on the next guess or the one after that. At long last, the 100 chances had all been used up. Alt-R had managed to obtain 11 acorns and felt very frustrated. Alt-R searched the face of She-Who-Saves-Many-Lives but saw no hint of the rule. 

“Has anyone figured out your rule? Has anyone done better?” asked Alt-R as politely as he could in his state of frustration.

“Yes, indeed, I must say that someone did do much better. In particular, one of my friends was able to gather 34.” 

Alt-R was taken aback, but he was still curious. “But then no-one has gotten all 100? No-one has really figured out the rule?” 

She-Who-Saves-Many-Lives cocked her head to the side and her endless brown eyes looked into the heart of Alt-R. “Who said there was a rule?” 

“Who…? I mean, there has to be a rule, right? How did you know how to switch the acorn each time and mostly fool me?”

She-Who-Saves-Many-Lives lowered her voice and looked down. “Who said there was an acorn every time?” 

“But…! You said…I don’t understand? How did someone gather 34 then? Who was this one who outguessed me three to one?”

She-Who-Saves-Many-Lives looked at him long and hard watching him go through the possibilities in his head. Some he gave voice to. Was it this young man? Was it this young woman? Was it this elder? At last, he ran out of likely possibilities.

“None of those, Alt-R, it was the very creature I asked you to study. The possum.” 

“WHAT?” shouted Alt-R, against all protocol. “I was outsmarted by a possum? That’s impossible!”

“Not at all impossible, Alt-R. It happened. The reason is quite simple. You looked at this as a test of how smart you were or how much empathy you had. You assumed there was one acorn per trial. You assumed that there was a rule. And then you spent all your time trying to determine the rule. What did possum do?”

Alt-R frowned, “What did possum do? How could I possibly know?”

Photo by Pixabay on Pexels.com

“You couldn’t. Because you didn’t follow my advice and learn to know possum and how he felt about things, what he smelled about things, what he saw, how he loved, and feared, and died.”

Alt-R hung his head. This had not really been a test. This had been another teaching – a teaching that taught him that he should have followed the first teaching. “You are right, She-Who-Saves-Many-Lives, but I still don’t see how possum could have done better than I did.”

The Shaman explained, “You came in here and made assumptions. You were trying to find the acorn each time, assuming that there was one. You were trying to figure out the rule. I always put the acorn in the cup left-most to you and to possum, but only a third of the time, and not according to any rule. After finding two acorns in the left cup, the possum simply chose the left cup every time: most often he was wrong, but a third of the time he was right. You came in hungry for rules and assumptions. The possum came in hungry for acorns.” 
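A rough check of the arithmetic: if the acorn sits under the left cup on a random third of the 100 chances, always choosing the left cup wins about 33 acorns, while spreading guesses over all three cups in search of a rule wins about 11, close to the totals in the story. Here is a minimal simulation sketch of that comparison; the independent random placement and the two choosing strategies are simplifying assumptions of mine, not the Shaman's actual sequence.

import random

def play(chooser, rounds=100, p_acorn=1/3, seed=0):
    """Count acorns won over `rounds` chances. When an acorn is present
    (with probability p_acorn), it is always under cup 0, the left cup."""
    rng = random.Random(seed)
    acorns = 0
    for _ in range(rounds):
        acorn_cup = 0 if rng.random() < p_acorn else None  # left cup, or no acorn at all
        if chooser(rng) == acorn_cup:
            acorns += 1
    return acorns

possum = lambda rng: 0                      # after two finds, always the left cup
rule_seeker = lambda rng: rng.randrange(3)  # a stand-in for Alt-R: guesses spread over all three cups

print(play(possum))       # roughly 33 out of 100
print(play(rule_seeker))  # roughly 11 out of 100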

“Thank you, She-Who-Saves-Many-Lives.”

“Please return tomorrow night, Alt-R, for I have one further lesson.”

The next morning, She-Who-Saves-Many-Lives summoned Pond Mud, for Pond Mud, like Alt-R, had another few lessons to learn. 

“Come, Pond Mud, I have a simple task for you. You are one of the strongest young men in the village. Is that not so?”

“Well, She-Who-Saves-Many-Lives, I do not know but I have overheard some say that, yes.”

“So, Pond Mud, you value physical strength. Is that so?”

“Yes, indeed, She-Who-Saves-Many-Lives! That is why the ant…well, we will not speak of that.  Anyway, yes, I am strong and I value physical strength.” 

“Good, Pond Mud, then you will have no trouble with this small task. I would like you to push over that old cabin. I wish to build a new one.” 

“Well, She-Who-Saves-Many-Lives, I am strong but … I mean the cabin is well-built…it is meant to withstand snow and wind and you want me to try to push it down?” queried Pond Mud. 

“No, I want you to actually push it down, not try to push it down. Proceed.” 

Pond Mud walked over to the cabin and walked around it looking for a possible flaw or weak point but found nothing. He braced himself and pushed with both hands but nothing moved. He turned his shoulder to the edge and pushed but nothing moved. He lay on his back and pushed with his legs but that slid him backwards. He found two giant boulders and rolled them near the cabin and used the boulders to brace himself and pushed with both legs. He could not budge the cabin. He looked at the boulders and began to hatch an elaborate plan to smash the cabin with the boulders. 

“Pond Mud, you failed to push over the cabin. Please follow me. I want to show you a larger, stronger cabin that someone did push over. It is near. Follow.”

They soon came to a small clearing where the collapsed remains of a large cabin lay scattered about. “Pond Mud, what would you say regarding the strength of the creature who pushed this cabin down?”

“Gigantic. Perhaps a great cave bear. Or perhaps a bison? But it’s in the woods. A giant moose perhaps?”

“Pond Mud, look closely at that log and tell me what you see.” 

Pond Mud strode quickly to the indicated spot. “It’s just a log. I mean it’s filled with … it’s filled with … carpenter ants. It’s filled with carpenter ants.” 

“I see you studied the ants enough at least to recognize one when you see one. Let us return now to my cabin because your friend Alt-R is about to appear.”

They strode in silence back to the cabin of She-Who-Saves-Many-Lives. Indeed, Alt-R had just arrived. 

She-Who-Saves-Many-Lives looked at each of them and said quietly, “I am sure by now you both realize that you will not be getting the Third Ring of Empathy. However, I am giving you each two other gifts. And each such gift, I can assure you, is worth far more than a ring with a pretty stone affixed.”

“The first gift is that you now realize not to dismiss a human or any creature because they seem neither so smart nor so strong as you. And, now that you understand this, you may choose to become better and better at seeing things through another’s eyes. And, if you so choose, you will have a much better life and help those around you to also have a much better life. If you so choose, you can instead ignore this lesson and disdain those who are not like you. It’s your choice.”

“But if I learn the lesson, then why can I not yet be in contention to be your replacement?” wondered Pond Mud and Alt-R aloud and almost in unison.

“Because,” said the Shaman, “it was not your first instinct to do so. Under stress or duress, you will be prone to revert to your first instinct and stressful situations are precisely such times that your empathy is most needed. Over time, over many wanderings of the stars back to their homes, your first instinct will change and you will be just as able to see through the eyes of another as any of the initiates. But if I die tomorrow, it would not be well for you or for the tribe or even for all the other creatures that share this world with the Veritas.”

The silence grew at first and the crickets decided it was their turn to talk. And so it was. But after a time, Pond Mud spoke again.

“What, then, was the second gift?” he asked. 

“The second gift is that now you know that you are not always the best at everything, though you, Alt-R, may well be the smartest among all the Veritas. And that knowledge that you are not the most able at everything can save you an ocean of pain if you choose to keep learning from those around you who know things you do not or those who are able to perceive things you cannot. And you, Pond Mud, though you are strong, are not therefore to demand special privilege because of it. To the sun and the moon and the mountain, your strength is like that of the ants, only less so. Keep about you the humility that befits being strongest.” 

Alt-R spoke then, “Thank you, She-Who-Saves-Many-Lives. It is well. And, I take your teaching as my learning kept close to heart. I will choose to follow the path of the greater wisdom.” 

Pond Mud spoke next, saying, “Thank you, She-Who-Saves-Many-Lives. I too shall now look at such strength as I may sometimes have as a treasure not for myself alone but for all of the Veritas.” 

{Translator’s Note}: The reader may well wonder why so much of this myth revolves around the two who lost the contest rather than those who won. This focus on continually trying to teach the entire tribe to learn from failures rather than simply be shamed by them is typical of the Veritas. The Veritas, insofar as I can tell from such a distance in time, space, and culture, cared not only for the lessons of those who won the contest, but also for the lessons of those who lost, for among the Veritas, every leaf on the tree got sustenance from the rest of the tree and provided loving sustenance from the sun itself to the rest of the tree. 

———————————————————-

Magic Portal to Four Completely Different Universes

Pros and Cons of Artificial Intelligence

29 Thursday Sep 2016

Posted by petersironwood in Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, Turing, user experience

The Pros and Cons of AI Part Three: Artificial Intelligence

We have already shown in the two previous blogs why it is more effective and efficient to replace eating with Artificial Ingestion and to replace sex with Artificial Insemination. In this, the third and final part, we will discuss why human intelligence should be replaced with Artificial Intelligence. The arguments, as we shall see, are mainly simple extrapolations from replacing eating and sex with their more effective and efficient counterparts.

Human “intelligence” is unpredictable. In fact, all forms of human behavior are unpredictable in detail. It is true that we can often predict statistically what people will do in general. But even those predictions often fail. It is hard to predict whether and when the stock market will go up or down or which movies will be blockbuster hits. By contrast, computers, as we all know, never fail. They are completely reliable and never make mistakes. The only exceptions to this general rule are those rare cases where hardware fails, software fails, or the computer system was not actually designed to solve the problems that people actually had. Putting aside these extremely rare cases, other errors are caused by people. People may cause errors because they failed to read the manual (which doesn’t actually exist because to save costs, vendors now expect that users should look up the answers to their problems on the web) or because they were confused by the interface. In addition, some “errors” occur because hackers intentionally make computer systems operate in a way that they were not intended to operate. Again, this means human error was the culprit. In fact, one can argue that hardware errors and software errors were also caused by errors in production or design. If these errors see the light of day, then there were also testing errors. And if the project ends up solving problems that are different from the real problems, then that too is a human mistake in leadership and management. Thus, as we can see, replacing unpredictable human intelligence with predictable artificial intelligence is the way to go.

Human intelligence is slow. Let’s face it. To take a representative activity of intelligence, it takes people seconds to minutes to do simple square roots of 16-digit numbers while computers can do this much more quickly. It takes even a good artist at least seconds and probably minutes to draw a good representation of a birch tree. But Google can pull up an excellent image in less than a second. Some of these will not actually be pictures of birch trees, but many of them will.

Human intelligence is biased. Because of their background, training and experience, people end up with various biases that influence their thinking. This never happens with computers unless they have been programmed to do something useful, in which case some values will have to be either programmed into them or learned through background, training and experience.

Human intelligence in its application most generally has a conscious and experiential component. When a human being is using their intelligence, they are aware of themselves, the situation, the problem and the process, at least to some extent. So, for example, the human chess player is not simply playing chess; they are quite possibly enjoying it as well. Similarly, human writers enjoy writing; human actors enjoy acting; human directors enjoy directing; human movie goers enjoy the experience of thinking about what is going on in the movie and feeling, to a large degree, what people on the screen are attempting to portray. This entire process is largely inefficient and ineffective. If humans insist on feeling things, that could all be accomplished much more quickly with electrodes.

Perhaps worst of all, human intelligence is often flawed by trying to be helpful. This is becoming less and less true, particularly in large cities and large bureaucracies. But here and there, even in these situations that should be models of blind rule-following, you occasionally find people who are genuinely helpful. The situation is even worse in small towns and farming communities where people are routinely helpful, at least to the locals. It is only when a user finds themselves interacting with a personal assistant or audio menu system with no possibility of a pass-through to a human being that they can rest assured that they will not be distracted by someone actually trying to understand and help solve their problem.

Of course, people in many professions, whether they are drivers, engineers, scientists, advertising teams, lawyers, farmers, police officers etc. will claim that they “enjoy” their jobs or at least certain aspects of them. But what difference does that make? If a robot or AI system can do 85 to 90% of the job in a fast, cheap way, why pay for a human being to do the service? Now, some would argue that a few people will be left to do the 10-15% of cases not foreseen ahead of time in enough detail to program (or not seen in the training data). But why? What is typically done, even now, is to just let the user suffer when those cases come up. It’s too cumbersome to bother with back-up systems to deal with the other cases. So long as the metrics for success are properly designed, these issues will never see the light of day. The trick is to make absolutely sure that the user has no alternative means of recourse to bring up the fact that their transaction failed. Generally, as the recent case with Yahoo shows, even if the CEO becomes aware of a huge issue, there is no need to bring it to public attention.

All things considered, it seems that “Artificial Intelligence” has a huge advantage over “Natural Intelligence.” AI can simply be defined to be 100% successful. It can save money and that money can be appropriately partitioned to top company management, shareholders, workers, and consumers. A good general formula to use in such cases is the 90-10 rule; that is, 90% of the increased profits should go to the top management and 10% should go to the shareholders.

As against increased profits, one could argue that people get enjoyment out of the thinking that they do. There is some truth to that, but so what? If people enjoy playing doctor, lawyer, and truck driver, they can still do that, but at their own expense. Why should people pay for them to do that when an AI system can do 85% of the job at nearly zero costs? Instead of worrying about that, we should turn our attention to a more profound problem: what will top management do with that extra income?

Author Page on Amazon

Turing’s Nightmares

Pros and Cons of Artificial Insemination

27 Tuesday Sep 2016

Posted by petersironwood in psychology, Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, user experience

The Pros and Cons of AI: Part Two (Artificial Insemination).

Animal husbandry and humane human medical practice offer up many situations where artificial insemination is a useful and efficient technique. It is often used in horse breeding, for example, to avoid the risk of injury that more natural breeding might engender. There are similarly many cases where a couple wants to get pregnant and the “ordinary” way will not work. This could be due to physical problems with the man, the woman, or both. In some cases, it will even be necessary to use sperm from someone who is not going to be the legal father. Generally, the couple will decide it is more acceptable emotionally if the sperm donor is anonymous and the insemination is not done via intercourse.

But what about all those cases where the couple tries, and indeed succeeds, the “old-fashioned way”? An argument could certainly be made that all intercourse should be replaced with AI (artificial insemination).

First, the old-fashioned way often produces emotional bonding between the partners. (Some even call it “making love.”) No-one has ever provided a convincing quantitative economic analysis of why this is beneficial. It is certainly painful when pair-bonded individuals are split apart by divorce or death. AI would not prevent all pair bonding, but it could help reduce the risk of such bonds being formed.

Second, the old-fashioned way risks the transmission of sexually transmitted diseases. Even when pairs are not trying to get pregnant and even when they have the intention of using forms of “protection”, sometimes passion overtakes reason and people, in the heat of the moment, “forget” to use protection. AI provides an opportunity for screening and for greatly reducing the risk of STDs being spread.

Third, the combinations of genes produced by sexual intercourse are random and uncontrolled. While it is currently beyond the state of the art, one can easily imagine that sometime in this century it will be possible to “screen” sperm cells and only choose the “best” for AI.

Fourth, traditional sex is often quite expensive in terms of economic costs. Couples will often spend hours engaging in procreational activities that need only take minutes. Beyond that, traditional sex is often accompanied by special dinners, walks on the beach, playing romantic music, and often couples continue to stay together in essentially unproductive activities even after sex, such as cuddling and talking.

There are probably additional reasons why AI makes a lot of sense economically and why it is a lot better than the old-fashioned alternative.

Of course, one could take the tack of considering life as something valuable for the experiences themselves and not merely as a means to an end of higher productivity. This seems a dangerously counter-cultural stand to take in modern American society, but in the interest of completeness, and mainly just to prove its absurdity, let us consider for a moment that sex may have some intrinsic and experiential value to the participants.

Suppose that lovers take pleasure in the sights, sounds, smells, feels, and tastes associated with their partners. Imagine that the sexual acts they engage in provide pleasure in and of themselves. There seems to be a great deal of uncertainty about the monetary value of these experiences since the prices charged for artificial versions of these experiences can easily vary by a factor of ten or more. In fact, there have been reports that some people will only engage in sex that is not paid for directly.

So, on the one hand, we have the provable efficiency and effectiveness of AI. On the other hand, we have human experiences whose value is problematic to quantify. The choice seems obvious. Sometime in this century, no doubt, all insemination will be done artificially so that everyone (or at least some very rich people) can enjoy the great economic benefits that will come about from the increased efficiency and effectiveness of AI as compared with “natural” sex.

As further proof, if it is needed, imagine two island countries alike in every way in terms of climate, natural beauty, current economic opportunity, literacy and so on. In fact, the only way these two islands differ is that on one island (which we shall call AII for Artificial Insemination Isle) all “sex” is limited to AI whilst on the other island (which we shall call NII for Natural Insemination Isle) sex is natural and people can spend as much or as little time as they like doing it. Now, people are given a choice about which island to live on. Certainly, with its greater prospects of economic growth and efficiency, everyone would choose to live on AII while NII would be virtually empty. Readers will recognize that this is essentially the same argument as to why “Artificial Ingestion” should surely replace “Natural Ingestion” — cheaper, faster, more reliable. If readers see any holes in this argument, I’d surely like to be informed of them.

Turing’s Nightmares

Author Page on Amazon

Abracadabra!

07 Sunday Aug 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

Tags

"Citizens United", AI, Artificial Intelligence, biotech, cognitive computing, emotional intelligence, ethics, the singularity, Turing

Abracadabra! Here’s the thing. There is no magic. Of course, there is the magic of love and the wonder at the universe and so there is metaphorical magic. But there is no physical magic and no mathematical magic. Why do we care? Because in most science fiction scenarios, when super-intelligence happens, whether it is artificial or humanoid, magic happens. Not only can the super-intelligent person or computer think more deeply and broadly, they also can start predicting the future, making objects move with their thoughts alone and so on. Unfortunately, it is not just in science fiction that one finds such impossibilities but also in the pitches of companies about biotech and the future of artificial intelligence. Now, don’t get me wrong. Of course, there are many awesome things in store for humanity in the coming millennia, most of which we cannot even anticipate. But the chances of “free unlimited energy” and a computer that will anticipate and meet our every need are slim indeed.

This all-too-popular exaggeration is not terribly surprising. I am sure much of what I do seems quite magical to our cats. People in possession of advanced or different technology often seem “magical” to those with no familiarity with the technology. But please keep in mind that making a human brain “better”, whether by making it bigger, giving it more connections, or making it faster, will not enable the brain to move objects via psychokinesis. Yes, the brain does produce a minuscule amount of electricity, but way too little to move mountains or freight trains. Of course, machines can theoretically be built to wield a lot of physical energy, but it isn’t the information processing part of the system that directly causes something in the physical world. It is through actuators of some type, just as it is with animals. Of course, super-intelligence could make the world more efficient. It is also possible that super-intelligence might discover as yet undiscovered forces of the universe. If it turns out that our understanding of reality is rather fundamentally flawed, then all bets are off. For example, if it turns out that there are twelve fundamental forces in the universe (or just one), and a super-intelligent system determines how to use them, it might be possible that there is potential energy already stored in matter which can be released by the slightest “twist” in some other dimension or using some as yet undiscovered force. This might appear as “magic” to human beings who have never known about the other eight forces, let alone how to harness them.

There is another more subtle kind of “magic” that might be called mathematical magic. As has been known for a long time, it is theoretically possible to play perfect chess by calculating all possible moves, and all possible responses to those moves, etc., down to the final draws and checkmates. It has been calculated that such an enumeration of contingencies would not be possible even if the entire universe were a nano-computer operating in parallel since the beginning of time. There are many similar domains. Just because a person or computer is way, way smarter does not mean they will be able to calculate every possibility in a highly complex domain.
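A back-of-the-envelope version of that calculation, using the commonly cited rough figures of about 35 legal moves per position and games of about 80 half-moves (every constant below is an order-of-magnitude assumption, not an exact count):

# Rough, Shannon-style comparison of the chess game tree with the computing
# capacity of the universe; all constants are order-of-magnitude guesses.
branching = 35                      # typical number of legal moves per position
plies = 80                          # a typical game length in half-moves
game_tree = branching ** plies      # distinct lines of play: about 10**123

atoms = 10 ** 80                    # common estimate of atoms in the observable universe
age_seconds = 4.3e17                # about 13.8 billion years, in seconds
ops_per_atom_per_second = 10 ** 12  # generously, a terahertz "nano-computer" per atom
total_operations = atoms * age_seconds * ops_per_atom_per_second  # about 10**109

print(f"lines of play:    about 10^{len(str(game_tree)) - 1}")
print(f"total operations: about 10^{len(str(int(total_operations))) - 1}")
# Even this absurdly generous computer falls short by a factor of roughly 10**14.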

Of course, it is also possible that some domains might appear impossibly complex but actually be governed by a few simple, but extremely difficult to discover laws. For instance, it might turn out that one can calculate the precise value of a chess position (encapsulating all possible moves implicitly) through some as yet undiscovered algorithm written perhaps in an as yet undesigned language. It seems doubtful that this would be true of every domain, but it is hard to say a priori. 

There is another aspect of unpredictability and that has to do with random and chaotic effects. Imagine trying to describe every single molecule of earth’s seas and atmosphere in terms of its motion and position. Even if there were some way to predict state N+1 from N, we would have to know everything about state N. The effects of the slightest miscalculation or missing piece of data could be amplified over time. So long-term predictions of fundamentally chaotic systems like weather, or what your kids will be up to in 50 years, or what the stock market will be in 2600 are most likely impossible, not because our systems are not intelligent enough but because such systems are by their nature not predictable. In the short term, weather is largely, though not entirely, predictable. The same holds for what your kids will do tomorrow or, within limits, what the stock market will do. The long term predictions are quite different.
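The amplification is easy to see in even the simplest chaotic system. The sketch below uses the standard logistic-map toy example rather than real weather, with parameter values picked only for illustration: two starting states that differ by one part in a billion agree for a while and then bear no resemblance to each other.

# Logistic map x -> r * x * (1 - x): a textbook example of sensitive
# dependence on initial conditions (r = 4.0 is in the chaotic regime).
r = 4.0
x, y = 0.2, 0.2 + 1e-9   # two states differing by one part in a billion

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.9f}")
# The gap stays microscopic for a few dozen steps, then grows until the two
# trajectories are as different as any two unrelated states.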

In The Sciences of the Artificial, Herb Simon provides a nice thought experiment about the temperature in various regions of a closed space. I am paraphrasing, but imagine a dormitory with four “quads.” Each quad has four rooms and each room is partitioned into four areas with screens. The screens are not very good insulators so if the temperature in these areas differ, they will quickly converge. In the longer run, the temperature will tend toward average in the entire quad. In the very long term, if no additional energy is added, the entire dormitory will tend toward the global average. So, when it comes to many kinds of interactions, nearby interactions dominate, but in the long term, more global forces come into play.
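A tiny simulation makes the point concrete. This is only a sketch of the idea, with made-up coupling rates rather than anything from Simon's book: temperatures inside each room even out within a few steps, while the differences between rooms linger for hundreds.

import random

# Toy version of the dormitory: 4 rooms, each partitioned into 4 areas by screens.
# Areas within a room exchange heat quickly; whole rooms exchange heat only slowly.
random.seed(1)
rooms = [[random.uniform(10, 30) for _ in range(4)] for _ in range(4)]

def mix(values, rate):
    """Move every value a fraction `rate` of the way toward the group's average."""
    avg = sum(values) / len(values)
    return [v + rate * (avg - v) for v in values]

for step in range(1, 301):
    rooms = [mix(room, rate=0.5) for room in rooms]           # fast mixing across screens
    room_avgs = mix([sum(r) / 4 for r in rooms], rate=0.01)   # slow mixing through walls
    rooms = [[t + (room_avgs[i] - sum(rooms[i]) / 4) for t in rooms[i]] for i in range(4)]
    if step in (3, 30, 300):
        within = max(max(r) - min(r) for r in rooms)
        between = max(room_avgs) - min(room_avgs)
        print(f"step {step:3d}: within-room spread {within:5.2f}, between-room spread {between:5.2f}")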

Now, let us take Simon’s simple example and consider what might happen in the real world. We want to predict what the temperature is in a particular partitioned area in 100 years. In reality, the dormitory is not a closed system. Someone may buy a space heater and continually keep their little area much warmer. Or, maybe that area has a window that faces south. But it gets worse. Much worse. We have no idea whether the dormitory will even exist in 100 years. It depends on fires, earthquakes, and the generosity of alumni. In fact, we don’t even know whether brick and mortar colleges will exist in 100 years. The reason is that as we try to predict over longer and longer time frames, not only do more physically distant factors come into play; the determining factors also become more distant conceptually. In a 100-year time frame, the entire college may or may not exist and we don’t even know whether the determining factor(s) will be financial, astronomical, geological, political, social, physical or what. This is not a problem that will be solved via “Artificial Intelligence” or by giving human beings “better brains” via biotech.

Whoa! Hold on there. Once again, it is possible that in some other dimension or using some other as yet undiscovered force, there is a law of conservation so that going “off track” in one direction causes forces to correct the imbalance and get back on track. It seems extremely unlikely, but it is conceivable that our model of how the universe works is missing some fundamental organizing principle and what appears to us as chaotic is actually not.

The scary part, at least to me, is that, in some descriptions of the wonderful world that awaits us (once our biotech or AI start-up is funded), that wonderful world depends on there being a much simpler, as yet unknown force or set of forces that is discoverable and completely unanticipated. Color me “doubting Thomas” on that one.

It isn’t just that investing in such a venture might be risky in terms of losing money. It is that we humans are subject to blind pride that makes people presume that they can predict what the impact of making a genetic change will be, not just on a particular species in the short term, but on the entire planet in the long run! We can indeed make small changes in both biotech and AI and see improvements in our lives. But when it comes to recreating dinosaurs in a real life Jurassic Park or replacing human psychotherapists with robotic ones, we really cannot predict what the net effect will be. As humans, we are certainly capable of imagining possibilities, containing them, and slowly testing them as we introduce them. Yeah. That could happen. But…

What seems to actually happen is that companies not only want to make more money; they want to make more money now. We have evolved social and legal and political systems that put almost no brakes on runaway greed. The result is that more than one drug has been put on the market that has had a net negative effect on human health. This is partly because long term effects are very hard to ascertain, but the bigger cause is unbridled greed. Corporations, like horses, are powerful things. You can ride farther and faster on a horse. And certainly corporations are powerful agents of change. But the wise rider is master of, or partner with, the horse. They don’t allow themselves to be dragged along the ground by a rope and let the horse go wherever it will. Sadly, that is precisely the position that society is in vis-à-vis corporations. We let them determine the laws. We let them buy elections. We let them control virtually every news medium. We no longer use them to get amazing things done. We let them use us to get done what they want done. And what is that thing that they want done? Make hugely more money for a very few people. Despite this, most companies still manage to do a lot of net good in the world. I suspect this is because human beings are still needed for virtually every vital function in the corporation.

What will happen once the people in a corporation are no longer needed? What will happen when people who remain in a corporation are no longer people as we know them, but biologically altered? It is impossible to predict with certainty. But we can assume that it will seem to us very much like magic.

 

 

 

 

Very.

Dark.

Magic.

Abracadabra!

Turing’s Nightmares

Photo by Nikolay Ivanov on Pexels.com

Old Enough to Know Less

19 Tuesday Jul 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, machine learning, prejudice, the singularity, Turing


Old Enough to Know Less?

There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are based on a billion years of evolution and are made of living cells has implications about how we experience the world. However, here we are addressing a much less philosophical and more practical issue. Moving around and interacting facilitates learning.

I first discussed this in an appendix to my dissertation. In it, I compared human behavior in a problem-solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from actual high-level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions they had already visited earlier in their thinking. My hypothesis was that Grand Masters examined one part of the game tree, then another, and in so doing updated a slightly altered copy of their general evaluation function; the copy learned from the exploration, so the evaluation became tuned to the particular position at hand.
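A minimal sketch of that hypothesis, written in Python purely for illustration (none of these names, features, or numbers come from the dissertation), might look like this: keep the general evaluation function, but tune a throwaway copy of it as one particular position is explored.

import copy

class Evaluator:
    """Toy linear evaluation: a weighted sum of hand-picked features."""
    def __init__(self, weights):
        self.weights = dict(weights)

    def score(self, features):
        return sum(self.weights[k] * v for k, v in features.items())

    def nudge(self, features, error, rate=0.01):
        # Shift weights slightly in the direction that reduces the error
        # observed while exploring this subtree.
        for k, v in features.items():
            self.weights[k] += rate * error * v

def evaluate_with_tuning(position_features, subtree_results, general_eval):
    """Tune a throwaway copy of the general evaluator to one position."""
    local_eval = copy.deepcopy(general_eval)          # start from general knowledge
    for features, observed_value in subtree_results:  # lessons from the subtree
        error = observed_value - local_eval.score(features)
        local_eval.nudge(features, error)             # tune to this position
    return local_eval.score(position_features)

# Example usage with made-up features and values.
general = Evaluator({"material": 1.0, "king_safety": 0.5, "mobility": 0.3})
subtree = [({"material": 0.0, "king_safety": -1.0, "mobility": 2.0}, 0.4)]
print(evaluate_with_tuning(
    {"material": 0.0, "king_safety": -1.0, "mobility": 2.0}, subtree, general))

The point of the sketch is only that the general function is never overwritten; re-examining positions is what feeds the local tuning.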

Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell, and their relationships. What we see when we walk, for instance, is not a random sequence of images (unlike TV commercials!), but a sequence with very particular and useful properties. As we approach objects, we typically get more and more detailed images of them. This allows a constant tuning process that lets us recognize things at a distance and from minimal cues.

An analogous case could be made for getting to know people. We make inferences and assumptions about people initially based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives, as well as the longer time delays, learning more about people is not guaranteed the way it is with visual stimuli. If a person begins interacting with people who are predefined to be in a “bad” category, experience with those people may be viewed through such a heavy filter that the person never changes their mind, despite what an outside observer might perceive as overwhelming evidence. If a man believes all people who wear hats are “stupid” and “prone to violence,” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule” or say, “Well, he doesn’t always wear hats” or “The hats he wears are made by non-hat wearers and that makes him seem peaceful and intelligent.” The misperceptions, over-generalizations, and prejudices partly continue because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.
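To see how such a filter can work arithmetically, here is a small illustration of my own (the numbers are invented): when a belief starts near certainty, even a string of disconfirming observations moves it very little.

def bayes_update(p_dangerous, p_evidence_if_dangerous, p_evidence_if_not):
    """One application of Bayes' rule to the belief 'this hat-wearer is dangerous'."""
    num = p_dangerous * p_evidence_if_dangerous
    return num / (num + (1 - p_dangerous) * p_evidence_if_not)

belief = 0.999                      # prejudice: near-certainty before any evidence
for _ in range(10):                 # ten peaceful encounters in a row
    # Peaceful behavior is common under either hypothesis, so each
    # encounter nudges the belief only slightly.
    belief = bayes_update(belief,
                          p_evidence_if_dangerous=0.7,
                          p_evidence_if_not=0.9)
print(round(belief, 3))             # still roughly 0.99 after all that evidence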

Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulations? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how they are initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one directed top-down by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” What this rule claims (taken from the work of Paula Underwood) is that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do. Try to gather evidence about these hypotheses.
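As a sketch of what the Rule of Six might look like if built into an artificial “personal assistant” (my own toy rendering, not Underwood’s formulation), the system would refuse to act on a single explanation and instead rank several against whatever evidence it has gathered:

def rule_of_six(hypotheses, evidence_scores):
    """Rank candidate explanations by gathered support, not by first impression."""
    assert len(hypotheses) >= 6, "Generate at least six explanations before acting."
    return sorted(hypotheses,
                  key=lambda h: evidence_scores.get(h, 0.0),
                  reverse=True)

# Made-up hypotheses about why someone always wears a hat.
hypotheses = ["hiding baldness", "sun protection", "religious custom",
              "fashion statement", "keeps head warm", "gift from a friend"]
evidence = {"sun protection": 2.0, "keeps head warm": 1.0}  # observations so far
print(rule_of_six(hypotheses, evidence))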

If prejudice and bigotry are allowed to flourish as an “acceptable political position,” it can lead to the erosion of peace, prosperity, and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many if not most individual “Whites” themselves prejudiced; it became illegal even for unprejudiced whites to sit at the same counters, use the same restrooms, etc. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law with an insane dictator at the helm, millions of lives were lost in “fixing” it. Of course, even having the Allies win World War II did not bring back the six million Jews who were killed. The Germans were very close to developing the atomic bomb before the USA. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and people who were differently abled on everyone? Of course they would have. And then, what would have happened once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.

Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does that play out if that nation is headed by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, that more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?

—————————————-

More about Turing’s Nightmares can be found here: Author Page on Amazon

Sweet Seventeen in Turing’s Nightmares

02 Thursday Jun 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, cybersex, emotional intelligence, ethics, the singularity, user experience


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even if you apologize for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real-life case in which a college basketball player successfully sued to get royalties for his image in an EA sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA)

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if the protagonist in any of these scenarios is itself an AI system? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology or whether there are reasons why any advanced intelligence, carbon- or silicon-based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction or even chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/her/him. Because there would be no reason whatsoever to doubt an AI system, right?

Turing’s Nightmares: Chapter 16

25 Wednesday May 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, UX

WHO CAN TELL THE DANCER FROM THE DANCE?


Is it the same dance? Look familiar?

 

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem, Among School Children. The actual last line is: “How can we know the dancer from the dance?” Both phrasings focus on the interesting problem of trying to separate process from product, a person from their creative works, calling into question whether such a separation is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints and contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who hold the key to this back door. Here, the implication is that some wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short-term profits. Of course, this is a work of fiction. In the real world, no one would conceivably be evil enough to mortgage the human habitability of our planet for even more short-term profit — certainly not someone already absurdly wealthy.

In the story, the protagonist, Geoffrey, is rather resentful of having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping the super-intelligent system would be objective. We can also assume the requirement was added late but that no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. Anyway, Geoffrey doesn’t take long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence per se, though it certainly could be used as a persuasive tool by AI systems. Rather than have a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings,” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” — in this case a super-intelligent someone — is persuaded by arguments to change their position and “agree” with the other Sing.

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan, and Ruslan’s wife. People can be manipulated. Readers should design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale

 

Turing’s Nightmares: Chapter 15

16 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring

Tutoring Intelligent Systems.


Learning by modeling; in this case by modeling something in the real world.

Of course, the title of the chapter is a takeoff on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples that are geared toward the specific gaps or misconceptions that that particular student has. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor,” there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the Proceedings: Radlinski, B., Atwood, M., and Villano, M., DIME: Distributed Intelligent Multimedia Education, Proceedings of CHI ’94 Conference Companion on Human Factors in Computing Systems, pages 15-16, ACM, New York, NY, USA, 1994.)
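The core loop of such a system can be suggested with a few lines of Python. This is only a schematic sketch of the general idea (student modeling plus example selection); it is not how the LISP tutor or DIME actually worked, and the skills, numbers, and function names are invented.

def update_model(knowledge, skill, correct, step=0.2):
    """Nudge the estimated mastery of one skill after an answer."""
    current = knowledge.get(skill, 0.5)
    target = 1.0 if correct else 0.0
    knowledge[skill] = current + step * (target - current)
    return knowledge

def next_exercise(knowledge, exercises):
    """Choose the exercise whose main skill has the lowest estimated mastery."""
    return min(exercises, key=lambda ex: knowledge.get(ex["skill"], 0.5))

# Example usage with made-up skills and exercises.
knowledge = {}
exercises = [{"name": "loop basics", "skill": "loops"},
             {"name": "nested loops", "skill": "loops"},
             {"name": "file I/O", "skill": "files"}]

update_model(knowledge, "loops", correct=True)
update_model(knowledge, "files", correct=False)
print(next_exercise(knowledge, exercises)["name"])   # -> "file I/O"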

The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. It seems as though many AI systems either take the approach of having human experts encode knowledge rather directly or of exposing the systems to many examples and letting them learn on their own. Human beings often learn by being exposed to examples and having a guide, tutor, or coach help them focus, provide modeling, and choose the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to being exposed to many Jeopardy questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.

In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. What the tutor actually said, presumably to encourage “Sing” to consider other possibilities about John and Alan, was combined with another hint about the implications of being differently abled, leading to the idea that there was no necessity for the AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates that it will take a long time to solve an international crisis. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and quicker to solve.

Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, etc. and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might really help people see their way clear to find other solutions to problems.


The story ends with a refrain paraphrased from the TV series “West Wing” — “What comes next?” is meant to be reminiscent of “What’s next?”, which President Bartlet uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater games; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it’s on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next after AI systems are able not only to understand and utilize human emotion but also to invent their own emotional states on the fly based on the nature of the problem at hand. Indeed, what comes next?

Turing’s Nightmares: Chapter 10

31 Thursday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, feelings, the singularity, Turing


Chapter Ten of Turing’s Nightmares explores the role of emotions in human life and in the life of AI systems. The chapter mainly explores the issue of emotions from a practical standpoint. When it comes to human experience, one could also argue that, like human life itself, emotions are an end and not just a means to an end. From a human perspective, or at least this human’s perspective, a life without any emotion would be a life impoverished. It is clearly difficult to know the conscious experience of other people, let alone animals, let alone an AI system. My own intuition is that what I feel emotionally is very close to what other people, apes, dogs, cats, and horses feel. I think we can all feel love, both romantic and platonic; that we all know grief, fear, anger, and peace, as well as a sense of wonder.

As to the utility of emotions, I believe an AI system that interacts extremely well with humans will need to “understand” emotions: how they are expressed, how they can be hidden or faked, and how they impact human perception, memory, and action. Whether a super-smart AI system needs emotions to be maximally effective is another question.

Consider emotions as a way of biasing perception, action, memory, and decision making depending on the situation. If we feel angry, it can make us physically stronger and alter our decision making. For the most part, decision making seems impaired, but anger can make us feel at least temporarily less guilty about hurting someone or something else. There might be situations where that proves useful. However, since we tend to surround ourselves with people and things we actually like, there are many occasions when anger produces counter-productive results.

There is no reason to presume that a super-intelligent AI system would need to copy the emotional spectrum of human beings. It may invent a much richer palette of emotions, perhaps 100 or even 10,000 of them, that it finds useful in various situations. The best emotional predisposition for doing geometry proofs may be quite different from the best emotional predisposition for algebra proofs, which again could be different from what works best for chess, go, or bridge.

Assuming that even a very smart machine does not possess infinite resources, it might be worthwhile for it to have different modes, whether or not we call them “emotions.” Depending on the type of problem to be solved or the situation at hand, not only should different information be fed into the system, but that information should be processed differently as well.

For example, if any organism or machine is facing “life or death” situations, it makes sense to be able to react quickly and focus on information such as the location of potential prey, predators, and escape routes. It also makes sense to use well-tested methods rather than taking an unknown amount of time to invent something entirely new.
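One way to picture “modes” computationally, purely as an illustration and not a claim about any real system, is as named presets that change what the system attends to and how long it deliberates before committing to an action:

from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    attention: tuple        # which kinds of input get priority
    deliberation_ms: int    # how long the system may search before acting
    explore_new_methods: bool

MODES = {
    "emergency": Mode("emergency", ("threats", "escape_routes"), 50, False),
    "reflective": Mode("reflective", ("long_term_goals", "novel_options"), 5000, True),
}

def choose_mode(situation):
    # Life-or-death situations call for fast, well-tested responses;
    # otherwise allow slower, more exploratory processing.
    return MODES["emergency"] if situation.get("life_or_death") else MODES["reflective"]

# Example usage with a made-up situation description.
mode = choose_mode({"life_or_death": True})
print(mode.name, mode.deliberation_ms, "ms deliberation budget")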

People often become depressed when there have been many changes in quick succession. This makes sense because many large changes mean that “retraining” may be necessary. So instead of rushing headlong to make decisions and take actions that may no longer be appropriate, watching what occurs in the new situations first is less prone to error. Similarly, society has developed rituals around large changes such as funerals, weddings, and baptisms. Because society designs these rituals, the individual facing changes does not need to invent something new when their evaluation functions have not yet been updated.

If the super-intelligent machines of the future are to keep getting “better,” they will have to be able to explore new possibilities. Just as with carbon-based life forms, intelligent machines will need to produce variety. Some varieties may be much more prone to emotional states than others. We could hope that super-intelligent machines might be more tolerant of a variety of emotional styles than people seem to be, but they may not be.

The last theme introduced in chapter ten has been touched on before; viz., that values, whether introduced intentionally or unintentionally, will bias the direction of evolution of AI systems for many generations to come. If the people who build the first AI machines feel antipathy toward feelings and see no benefit to them from a practical standpoint, emotions may eventually disappear from AI systems. Does it matter whether we are killed by a feelingless machine, a hungry shark, or an angry bear?

————————————-

For a recent popular article about empathy and emotions in animals, see Scientific American special collector’s edition, “The Science of Dogs and Cats”, Fall, 2015.

Turing’s Nightmares
