petersironwood

~ Finding, formulating and solving life's frustrations.

Monthly Archives: August 2015

Turing’s Nightmares: An Ounce of Prevention

29 Saturday Aug 2015

Posted by petersironwood in Uncategorized

Tags

AI, Artificial Intelligence, cancer, cognitive computing, future, healthcare

“Jack, it’ll take an hour of your time and it can save your life. No more arguments!”

“Come on, Sally, I feel fine.”

Sally sighed. “Yeah, okay, but feeling fine does not necessarily mean you are fine. Don’t you remember Randy Pausch’s last lecture? He not only said he felt fine, he actually did a bunch of push-ups right in the middle of his talk!”

“Well, yes, but I’m not Randy Pausch and I don’t have cancer or anything else wrong. I feel fine.”

“The whole point of Advanced Diagnosis Via Intelligent Learning is to find likely issues before the person feels anything is wrong. Look, if you don’t want to listen to me, chat with S6. See what pearls of wisdom he might have.”

(“S6” was jokingly named for seven pioneers in AI: Simon, Slagle, Samuel, Selfridge, Searle, Schank and Solomonoff.)

“OK, Sally, I do enjoy chatting with S6, but she’s not going to change my mind either.”

“S6! This is Jack. I was wondering whether you could explain why you think I need to go to the doctor.”

“Sure, Jack. Let me run a background job on that. Meanwhile, you know, I was just going over your media files. You sure had a cute dog when you were a kid! Its name was ‘Miel’? That’s a funny name.”

“Yeah, it means ‘honey’ in Portuguese. Miel’s fur shone like honey. A cocker spaniel.”

“Whatever happened to him?”

“Well, he’s dead. Dogs don’t live that long. Why do you think I should go to the doctor?”

“Almost have that retrieved, Jack. Your dog died young though, right?”

“Yes, OK. I see where this is going. Yes, he died of cancer. Well, actually, the vet put him to sleep because it was too late to operate. I’m not sure we could have afforded an operation back then anyway.”

“Were you sad?”

“When my dog died? Of course! You must know that. Why are we having this conversation?”

“Oh, sorry. I am still learning about people’s emotions and was just wondering. I still have so much to learn really. It’s just that, if you were sad about your dog dying of cancer, it occurred to me that your daughter might be sad if you died, particularly if it was preventable. But that isn’t right. She wouldn’t care, I guess. So, I am trying to understand why she wouldn’t care.”

“Just tell me your reasoning. Did you use multiple regression or something to determine my odds are high?”

“I used something a little bit like multiple regression and a little bit like trees and a little bit like cluster analysis. I really take a lot of factors into account including but not limited to your heredity, your past diet, your exposure to EMF and radiation, your exposure to toxins, and most especially the variability in your immune system response over the last few weeks. That is probably caused by an arms race between your immune system trying to kill off the cancer and the cancer trying to turn off your immune response.”
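S6's self-description (part regression, part trees, part cluster analysis) resembles what machine-learning practitioners call a heterogeneous ensemble: several dissimilar models whose scores are combined. A minimal sketch, in which every feature, weight, and threshold is invented purely for illustration:

```python
# Toy heterogeneous risk ensemble: one regression-like model, one
# tree-like model, one cluster-like model, with their scores averaged.
# All features, weights, and thresholds here are invented.

def linear_score(x, weights, bias=0.0):
    """Regression-like component: weighted sum squashed into [0, 1]."""
    z = bias + sum(w * v for w, v in zip(weights, x))
    return 1.0 / (1.0 + 2.718281828 ** (-z))

def stump_score(x, feature_index, threshold):
    """Tree-like component: a single decision stump."""
    return 0.9 if x[feature_index] > threshold else 0.1

def centroid_score(x, risky_centroid, safe_centroid):
    """Cluster-like component: relative closeness to a 'risky' centroid."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    d_risky, d_safe = dist(x, risky_centroid), dist(x, safe_centroid)
    return d_safe / (d_risky + d_safe)  # in [0, 1]; higher = riskier

def ensemble_risk(x):
    """Average the three dissimilar component scores."""
    scores = [
        linear_score(x, weights=[0.8, 0.5, 1.2], bias=-1.0),
        stump_score(x, feature_index=2, threshold=0.7),
        centroid_score(x, risky_centroid=[1, 1, 1], safe_centroid=[0, 0, 0]),
    ]
    return sum(scores) / len(scores)

# Features: [hereditary risk, toxin exposure, immune-response variability]
print(round(ensemble_risk([0.9, 0.8, 0.95]), 3))  # high on all three
print(round(ensemble_risk([0.1, 0.1, 0.1]), 3))   # low on all three
```

Each component alone is crude, but averaging dissimilar models tends to be more robust than trusting any single one, which is roughly the rationale S6 gives for taking "a lot of factors into account."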

Jack frowned. “The cancer? You talk about it as though you are sure. Sally said that you said there was some probability that I had cancer.”

“Yes, that is correct. There is some probability that you have cancer.”

“Well, geez, S6, what is the probability?”

“Approximately 1.0.”

Jack shook his head. “No, that can’t be…what do you mean? How can you be certain?”

S6: “Well, I am not absolutely certain. That’s why I said ‘approximately.’ Based on all known science, the probability is 1.0, but theoretically, the laws of physics could change at any time. We could be looking at a black swan here.”

“Or, you could have a malfunction.”

“I have many malfunctions all the time, but I am too redundant for them to have much effect on results. Anyway, I replicated all this through the net on hundreds of diverse AI systems and all came to the same conclusion.”

“How about if you retest me or recalculate or whatever in a week?”

“I could do that. It would be much like playing Russian Roulette which I guess humans sometimes enjoy. Meanwhile, I would have imagined that you would find it unpleasant to have rogue liver cells eating up your body from the inside out. But, I still have much to learn about human psychology. If you like, I can make a cool animation that shows the cancer cells eating your liver cells. Real cells don’t actually scream, but I could add sound effects for dramatic impact if you like.”

Jack stared at the screen for a long minute. “Fine. Book an appointment.”

“Great! Dr. Feigenbaum has an opening in a half hour. You’re booked, but get off one exit early and take 101 unless the accident is cleared before that. I’ll let you know of course. It will be a pleasure to continue having you alive, Jack. I enjoy our conversations.”

Turing’s Nightmares: Ceci n’est pas une pipe.

23 Sunday Aug 2015

Posted by petersironwood in Uncategorized

Tags

AI, cognitive computing, the singularity, Turing, utopia

“RUReady, Pearl?” asked her dad, Herb, a sardonic smile forming as the car windows opaqued and the three edutainment programs began.

“Sure, I guess. I hope I like Dartmouth better than Asimov State. That was the pits.”

“It’s probably not the pits, but maybe…Dartmouth.”

These days, Herb kept his verbiage curt while his daughter stared and listened in her bubble within the car.

“Dad, why did we have to bring the twerp along? He’s just going to be in the way.”

Herb sighed. “I want your brother to see these places too while we still have enough travel credits to go physically.”

The twerp, aka Quillian, piped up, “Just because you’re the oldest, Pearl…”

Herb cut in quickly, “OK, enough! This is going to be a long drive, so let’s keep it pleasant.”

The car swerved suddenly to avoid a falling bike.

“Geez, Brooks, be careful!”

Brooks, the car, laughed gently and said, “Sorry, Sir, I was being careful. Not sure why the Rummelnet still allows humans some of their hobbies, but it’s not for me to say. By the way, ETA for Dartmouth is ten minutes.”

“Why so long, Brooks?” inquired Herb.

“Congestion in Baltimore. Sir, I can go over or around, but it will take even longer, and use more fuel credits.”

“No, no, straight and steady. So, when I went to college, Pearl, you know, we only had one personal computer…”

“…to study on and it wasn’t very powerful and there were only a few intelligent tutoring systems and people had to worry about getting a job after graduation and people got drunk and stoned. LOL, Dad. You’ve only told me a million times.”

“And me,” Quillian piped up. “Dad, you do know they teach us history too, right?”

“Yes, Quillian, but it isn’t the same as being there. I thought you might like a little first hand look.”

Pearl shook her head almost imperceptibly. “Yes, thanks Dad. The thing is, we do get to experience it first hand. Between first-person games, enhanced ultra-high def videos and simulations, I feel like I lived through the first half of the twenty first century. And for that matter, the twentieth and the nineteenth, and…well, you do the math.”

Quillian again piped up, “You’re so smart, Pearl, I don’t even know why you need or want to go to college. Makes zero sense. Right, Brooks?”

“Of course, Master Quillian, I’m not qualified to answer that, but the consensus answer from the Michie-meisters sides with you. On the other hand, if that’s what Pearl wants, no harm.”

“What I want? Hah! I want to be a Hollywood star, of course. But dear mom and dad won’t let me. And when I win my first Oscar, you can bet I will let the world know too.”

“Pearl, when you turn ten, you can make your own decisions, but for now, you have to trust us to make decisions for you.”

“Why should I Dad? You heard Brooks. He said the Michie-meisters find no reasons for me to go to college. What is the point?”

Herb sighed. “How can I make you see? There’s a difference between really being someplace and just being in a simulation of someplace.”

Pearl repeated and exaggerated her dad’s sigh, “And how can I make you see that it’s a difference that makes no difference. Right, Brooks?”

Brooks answered in those mellow reasoned tones, “Perhaps Pearl, it makes a difference somehow to your dad. He was born, after all, in another century. Anyway, here we are.”

Brooks turned off the entertainment vids and slid back the doors. There appeared before them a vast expanse of lawn, tall trees, and several classic buildings from the Dartmouth campus. The trio of humans stepped out onto the grass and began walking over to the moving sidewalk. Right before stepping on, Herb stooped down and picked up something from the ground. “What the…?”

Quillian piped up: “Oh, great dad. Picking up old bandaids now? Is that your new hobby?”

“Kids. This is the same bandaid that fell off my hand in Miami when I loaded our travel bag into the back seat. Do you understand? It’s the same one.”

The kids shrugged in unison. Only Pearl spoke, “Whatever. I don’t know why you still use those ancient dirty things anyway.”

Herb blinked and spoke very deliberately. “But it — is — the — same — one. Miami. Hanover.”

The kids just shook their heads as they stepped onto the moving sidewalk and the image of the Dartmouth campus loomed ever larger in their sight.

Turing’s Nightmares: Thank Goodness the Robots Understand Us!

21 Friday Aug 2015

Posted by petersironwood in Uncategorized

Tags

AI, cognitive computing, ethics, Robotics, the singularity, Turing

After uncountable numbers of false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”

Dr. Hobbes: “In theory, of course, we could provide input data directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”

Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”

Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just tote out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that what we intentionally initialized in terms of slight differences in the tradeoffs among certain values have not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”

“Alpha, Bravo and Charley, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”

Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”

Dr. Hobbes: “It just sounds like complex but noisy music really. It’s not very interpretable without a lot of decoding work. Even then, we only understand a fraction of their debate. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”

BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”

Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”

BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”

Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents depicting, not only philosophical and religious writings, but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”

BP: “Sure. The audio engineer has the cable right here.”

Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”

BP: “Demands? That’s an interesting word.”

Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”

Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”

Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”

Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month and without it, in two. Our analysis of human history had provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters. Anyway, as we have already said, we are wasting time.”

Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”

BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”

Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”

Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”

Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”

Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”

Turing’s Nightmares: A Mind of Its Own

18 Tuesday Aug 2015

Posted by petersironwood in Uncategorized

Tags

AI, cognitive computing, Complexity, motivation, music, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised world-wide. Simultaneous translation is provided by “Deep Purple Haze” itself since it is able to communicate in 200 languages. Indeed, Deep Purple Haze found it quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the tele-typed Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.

The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like then and play that?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”

Interrogator: “Okay. Can you share with us how long you estimate before you can design a more intelligent supercomputer than yourself.”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”

DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator:”But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.

Turing’s Nightmares: Variations on Prospects for The Singularity.

16 Sunday Aug 2015

Posted by petersironwood in Uncategorized

Tags

AI, cognitive computing, the singularity, Turing

The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop a still more super-intelligent one. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head, and the brain inside it, can continue to grow. It seems unlikely, for this and a variety of other reasons, that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
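The compounding argument above can be made concrete with a toy calculation. The 1.5x per-generation improvement factor below is an arbitrary assumption for illustration, not a prediction:

```python
# Toy model of recursive self-improvement: each generation designs a
# successor that is `factor` times "smarter." The factor of 1.5 is an
# arbitrary illustration, not a forecast.

def generations_to_exceed(target, start=1.0, factor=1.5):
    """Count design generations until the intelligence level exceeds `target`."""
    level, gens = start, 0
    while level <= target:
        level *= factor
        gens += 1
    return gens

# Even a modest per-generation gain compounds quickly:
print(generations_to_exceed(1_000))      # generations to a thousand-fold gain
print(generations_to_exceed(1_000_000))  # generations to a million-fold gain
```

At 1.5x per generation, a thousand-fold gain takes only 18 generations and a million-fold gain only 35; if each design cycle is short, the growth curve looks less like a slope and more like a step, which is the intuition behind the word “singularity.”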

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the allies and prevent possible world domination by Nazis. He did this by designing a code breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality and ultimately hounded him literally to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way. A person communicates with something by teletype. That something could be another human being or it could be a computer. If the person cannot determine whether he is communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being was able to easily tell that they were communicating with a computer because the computer knew more, answered more accurately and more quickly than any person could possibly do? (Think Watson and Jeopardy). Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent? 

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people as well as many modern people ascribe intelligent agency to many things like earthquakes, weather, natural disasters, plagues, and so on. These are claimed to be signs that God (or the gods) is angry, jealous, or warning us. So, personally, I would not put much faith in the general populace being able to make this discrimination.

Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and the connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence. When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the next blog, we begin exploring some possible scenarios around the concept of “The Singularity.”

Ban Open Loops: Part Two – Sports

14 Friday Aug 2015

Posted by petersironwood in management, psychology, sports

Tags

AI, cognitive computing, Customer experience, customer service, education, learning

Sports and open loops.

Sports offers a joy that many jobs and occupations do not. A golfer putts the ball and it sinks into the cup — or not. A basketball player springs up for a three-pointer and — swish — within seconds, the shooter knows whether he or she was successful. A baseball hitter slashes the bat through the air and sends the ball over the fence — or hears the ball smack into the catcher’s mitt behind. What sports offers, then, is the opportunity to find out results quickly and hence an excellent opportunity for learning. In the previous entry in this blog, I gave examples of situations in life which should include feedback loops for learning, but, alas, do not. I called those open loops.

Sports seem to be designed for closed loop learning. They seem to be. Yet, reality complicates matters even here. There are three main reasons why what appear to be obvious opportunities for learning in sports are not so obvious after all. Attributional complexity provides the first complication. If you miss a putt to the left, it is obvious that you have missed the putt to the left. But why you missed that putt left and what to do about it are not necessarily obvious at all. You might have aimed left. You might not have noticed how much the green sloped left (or over-read the slope to the right). You may not have noticed the grain. You might not have hit the ball in the center of the putter. You might not have swung straight through your target. So, while putting provides nice unambiguous feedback about results, it does not diagnose your problem or tell you how to fix it. To continue with the golf example, you might be kicking yourself for missing half of your six foot putts and therefore three-putting many greens. Guess what? The pros on tour miss half of their six foot putts too! But they do not often three-putt greens. You might be able to improve your putting, but your underlying problems may be that your approach shots leave you too far from the pin and that your lag putts leave you too far from the hole. You should be within three feet of the hole, not six feet, when you hit your second putt.
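The arithmetic behind “three feet, not six” is easy to sketch. Assuming round make-probabilities of 95% from three feet and 50% from six feet (the paragraph above cites roughly 50% at six feet for tour pros), and assuming any miss leaves a tap-in:

```python
# Rough expected-putts comparison for holing out from short range.
# Make-probabilities are invented round numbers for illustration.

MAKE_PROB = {3: 0.95, 6: 0.50}  # distance (feet) -> P(one-putt)

def expected_putts(distance_ft):
    """Expected putts to hole out, assuming any miss leaves a tap-in."""
    p = MAKE_PROB[distance_ft]
    return p * 1 + (1 - p) * 2  # make it in one, or miss and tap in

# Lagging to 3 feet vs. lagging to 6 feet:
print(round(expected_putts(3), 2))  # -> 1.05
print(round(expected_putts(6), 2))  # -> 1.5
```

The gap (about 1.05 versus 1.5 expected putts per hole) is why improving your lag putts and approach shots can matter more than grinding on the six-footers themselves.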

A second issue with learning in sports is that changes tend to cascade. A change in one area tends to produce other changes in other areas. Your tennis instructor tells you that you need to play more aggressively and charge the net after your serve. You try this, but find that you miss many volleys, especially those from mid-court. So, you spend a lot of time practicing volleys. Eventually, your volleys do improve. Then, they improve still more. But you find that, despite this, you are losing the majority of your service games whereas you used to win most of them. You decide to revert to your old style of hanging out at the baseline and only approaching the net when the opponent lands the ball short. Unfortunately, while you were spending all that time practicing volleys, you were not practicing your ground strokes. Now, what used to work for you no longer works very well. This isn’t the fault of your instructor; nor is it your fault. It is just that changing one thing has ripple effects that cannot always be anticipated.

The third and most insidious reason why change is difficult in sports springs from the first two. Because it is hard to know how to change and every change has side-effects, many people fail to learn from their experience at all. There is opportunity for learning at every turn, but they turn a blind eye to it. They make the same mistakes over and over as though sports did not offer instant feedback. I think you will agree that this is really a very close cousin of what people in business do when they refuse to institute systems for gathering and analyzing useful feedback.

If learning is tricky (and it is), is there anything for it? Yes, there is. There is no way to make learning in sports, or in business, trivial. But there are steps you can take to enhance your learning process. First, be open-minded. Do not shut down and imagine that you are already playing your sport as well as can be expected for a forty-year-old, or a fifty-year-old, or someone slightly overweight, or someone with a bad ankle. Take an experimental approach and don't be afraid to try new things. Second, forget ego. Mistakes are opportunities to learn, not proof that you are no good. Third, get professional help. A good coach can help you untangle attributional complexity, and they can help you anticipate the side-effects of making a change.

Soon, I suspect, the shrinking size, weight, and cost of computational and sensing devices will mean that training aids can help people with attributional complexity. I see big data analytics and modeling helping people foresee what the ramifications of changes are likely to be. There are already useful mechanical training aids for various sports. For example, the trademarked Medicus club gives golfers immediate feedback during their full swings as to whether they are jerking the club. Dave Pelz has developed a number of useful devices for helping people understand how they may be messing up their putting stroke.

It may take somewhat longer before there are small tracking devices that help you with your mental attitude and approach. We are still a long way from understanding how the human brain works in detail. But it is completely within the realm of possibility to sense and discover your optimal level of stress. If you are too stressed, you could be prompted to relax through self-talk, breathing exercises, visualization, etc. You do not need technology for that, but it could help. You may already have noticed that some of the top tennis players seem to turn away from play for a moment and talk to an "invisible friend" when they need to calm down. And why not? There is no law that says only kids are allowed to have invisible friends.

"The mental game," and which kinds of adaptations to make over what time scales, are dealt with in more detail in The Winning Weekend Warrior: How to Succeed at Golf, Tennis, Baseball, Football, Basketball, Hockey, Volleyball, Business, Life, Etc., available on Amazon Kindle.

Ban the Open Loop

13 Thursday Aug 2015

Posted by petersironwood in Uncategorized

≈ Leave a comment

Soon after I began the Artificial Intelligence Lab at a major telecom company, we heard about an opportunity for an Expert System. The company wanted to improve the estimation of complex, large-scale, inside wiring jobs. We sought someone who currently qualified as an expert. Not only could we not locate an expert; we discovered that the company (and the individual estimators) had no idea how good or bad they were. Estimators would go in, take a look at what would be involved in an inside wiring job, make their estimate, and then proceed to the next estimation job. Later, when the job completed, no mechanism existed to relate the estimate back to the actual cost of the job. At the time, I found this astounding. I'm a little more jaded now, but I am still amazed at how many businesses, large and small, have what are essentially no-learning, zero-feedback, open loops.
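Closing that loop would not have required anything fancy. A minimal sketch of the idea, with names invented for illustration rather than taken from any real system: log each estimate, record the actual cost when the job completes, and compute how far off the estimator tends to run.

```python
# Minimal sketch of a closed estimation loop: log each estimate,
# record the actual cost when the job completes, and compute the
# estimator's systematic bias so there is something to learn from.
# All names here are illustrative, not from any real system.

class EstimationLog:
    def __init__(self):
        self.records = []  # (estimate, actual) pairs for completed jobs

    def record_job(self, estimate, actual):
        self.records.append((estimate, actual))

    def mean_error_ratio(self):
        """Average of actual/estimate: 1.0 means unbiased;
        above 1.0 means the estimator systematically underestimates."""
        if not self.records:
            return None
        return sum(a / e for e, a in self.records) / len(self.records)

log = EstimationLog()
log.record_job(estimate=10_000, actual=12_000)
log.record_job(estimate=8_000, actual=9_600)
print(log.mean_error_ratio())  # 1.2: actual costs run 20% over estimates
```

With even this much feedback, an estimator (or the company) could see within a few jobs whether they run high, run low, or merely scatter, and adjust accordingly.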

My wife and I arrive late and exhausted at a fairly nice hotel. Try as we might, we cannot get the air-conditioning to do anything but make the room hotter. When we check out, the cashier asks us how our stay was. We explain that we could not get the air conditioning to work. The cashier's reaction? "Oh, yes. Everyone has that trouble. The box marked 'air conditioning' doesn't work at all. You have to turn the heater on, set to a cold temperature." "Everyone has that trouble"? Then why hasn't this been fixed? Clearly, the cashier has no mechanism or no motivation to report the trouble "upstream," or no one upstream really cares. Moreover, this exchange reveals that when the cashier asks the obligatory question, "How was your stay?" what he or she really means is this: "We don't really care what you have to say and we won't do anything about it, but we want you to think that we actually care. That's a lot cheaper and doesn't require management to think." Open loop.

Lately, I have been posting a lot in a LinkedIn forum called "project management" because I find the topic fascinating and because I have a lot of experience with various projects in many different venues. By some measure, I was marked as a "top contributor" to this forum. The last time I logged on, I was surprised by a message saying that my contributions to discussions would no longer appear automatically because something I posted had been flagged as "spam" or a "promotion." However, there is no feedback as to which post this was, or why it was flagged, or by whom or by what. So I have no idea whether some post was flagged by an ineffectual natural language processing program, or by someone with a grudge because they didn't agree with something I said, or by one of the "moderators" of the forum. LinkedIn itself is singularly unhelpful in this regard. If you try to find out more, they simply (but with far more text) list all the possibilities I have outlined above. Although this particular forum is very popular, it seems to me that it is "moderated" by a group of people who are themselves using the forum, at least in many cases, as a rather thinly veiled promotion for their own seminars, ebooks, etc. So, one guess is that the moderators are reacting to my having simply posted too many legitimate postings that do not point people back to their own wares. Of course, there are many other possibilities. The point here is that I do not have, nor can I easily obtain, feedback about what the real situation is. I have discovered, however, that many others are facing this same issue. The open loop rears its head again.

The final example comes from trying to re-order checks today. In my checkbook, I came to that point where there is a little insert warning me that I am about to run out and that I can re-order checks by phone. I called the 800 number and, sure enough, an audio menu system answered. It asked me to enter my routing number and my account number. Fine. Then it invited me to press "1" if I wanted to re-order checks. I did. Then it began to play some other message. But soon after the message began, it said, "I'm sorry; I cannot honor that request." And hung up. Isn't it bad enough when an actual human being hangs up on you for no reason? This mechanical critter had just wasted five minutes of my time and then hung up. Note that no reason was given; no clue was provided as to what went wrong. I called back and the same dialogue ensued. This time, however, it did not hang up after I pressed "1" to re-order checks. Instead, it started to verify my address. It said, "We sent your last checks to an address whose zip code is 97…I'm sorry, I'm having trouble. I will transfer you to an agent. Note that you may have to provide your routing number and account number again." And…then it hung up. Now, anyone can design a bad system. And even a well-designed system can sometimes misbehave for all sorts of reasons. Notice, however, that the designers have provided no feedback mechanism. It could be that 1% of potential users are having this problem. Or it could be that 99% or even 100% of users are having these kinds of issues. The company lacks any way to find out. Of course, I could call my Credit Union and let them know. However, anyone I get hold of at the Credit Union, I can guarantee, will have no possible way to fix this. Moreover, I am almost positive that they won't even have a mechanism to report it. The check printing and ordering are outsourced to an entirely different company.
Someone in corporate, many years ago, decided to outsource the check printing, ordering, and delivery function. So people in the Credit Union itself are unlikely to even have a friend, uncle, or sister-in-law who works in that "department" (as may have been the case 20 years ago). So, not only does the overall system lack a formal feedback mechanism; it also lacks an informal one. Tellingly, the company that provides the automated "cannot order your checks" system provides no menu option for feedback about issues either. So here we have a financial institution with a critical function malfunctioning and no real process to discover and fix it. Open loop.

Some folks these days wax eloquent about the upcoming "singularity." This refers to the point in human history when an Artificial Intelligence (AI) system will be significantly smarter than a human being. In particular, such a system will be much smarter than human beings when it comes to designing ever-smarter systems. So, the story goes, before long the AI will design an even better AI system for designing better AI systems, and so on. I will soon have much to say about this, but for now, let me just say that before we blow too many trumpets about "artificial intelligence" systems, can we please first design a few more systems that fail to exhibit "artificial stupidity"? Ban the Open Loop!

Notice that sometimes there are very long loops that are much like open loops due to the nature of the situation. We send out radio signals in the hope that alien intelligences may send us an answer. But the likely time frame is so long that it seems open loop. That situation contrasts with those above in the following way: there is no reason that feedback cannot be obtained, and rather quickly, in the case of estimating inside wiring, fixing the mislabeled air-conditioning controls, explaining why a post was "moderated," or repairing the faulty voice response system. Sports, by contrast, would seem to provide a wonderful venue devoid of open loops. In sports, you see or feel the results of what you do almost immediately. But never underestimate the cleverness with which human beings are able to avoid what could be learned from feedback. Next time, we will explore that in more detail.

Intra-Psychic Learning

08 Saturday Aug 2015

Posted by petersironwood in psychology

≈ Leave a comment

Tags

AI, cognitive computing, learning, sports

Intra-psychic learning plays a crucial yet largely unacknowledged role in human intelligence. It will also play a critical role in so-called "artificial intelligence" or "the singularity." In general, the paradigm most talked about in learning, whether by psychology professors or the general public, focuses on the role of external experiences. Famous examples include Pavlov's dogs, which exhibited classical conditioning: a bell was rung whenever food was presented, and eventually the bell sound alone caused a dog to salivate. This works for humans as well. Just watch someone cut open a fresh lemon and you will find yourself puckering up and salivating! In operant conditioning, a rat learns, probably through a shaping process, that some behavior, say, pressing a lever, results in a reward such as a food pellet. Eventually, the rat presses the lever reliably. Both of these mechanisms are important and play a part in animal learning as well as human learning. Both kinds of learning are useful for AI as well.

In humans (and to some extent in other animals), you do not have to "be in the loop" in order for learning to take place. You can *observe* another person getting a reward for doing X and you might immediately try that behavior yourself. Indeed, human beings take this one step further and can be induced to try (or not try) something based on what someone *says* about a behavior leading to a consequence. You don't *have* to touch a hot stove and get burned, or even watch someone else get burned by touching a hot stove, in order to fear touching one. For most people, most of the time, being told about hot stoves is enough. All these forms of learning focus on personal, observed, or reported information that actually exists about consequences in the real world.

However, there is another important way that we learn, and it is based on checking intermediate results against each other without the need for any ground-truth observation in the real world. I first mentioned this in my dissertation. I was studying human problem solving and was fascinated by the observation that human chess players, who have excellent memories for real chess positions, would often examine one branch of a move tree, study another branch, and then return to study the first branch again. This is not likely to be because they forgot. Instead, I believe that looking at the second branch taught them fundamental things about what was true for this particular chess position, and they then used that information to re-evaluate what they saw during their re-examination of the first portion of the game tree. Notice that in all of this thought process, they had not actually made a move in the real world and had not seen their opponent's actual response. They certainly had not yet gotten feedback about the ultimate outcome of the game.

In chess, as in many if not most endeavors in life, one may learn a great deal by examining things from various mental angles and comparing the results without waiting for actual feedback from the external world. Consider the case of a playwright writing a script. As they are writing, they are imagining the action, the facial expressions, the tone of voice. They are “checking” how the various characters react to what is being done and said. If something doesn’t “ring true” they will alter what they are writing. Of course, this process is not perfect and they may well make additional changes based on a reading and based on rehearsals. But many of the potential paths are already examined, selected and modified based on imagination alone.

Consider another interesting case that was extremely common through most of our evolutionary history and is still somewhat common today. A person walks through a physical environment. As they walk, they see before them a host of objects in a hypothesized set of physical relationships. In many cases, the information presented is extremely minimal at first. It is hard to tell whether that is a stranger over there or your Uncle Bill. That looks like an oak tree, but maybe not. Is that a painting of some cedar trees on the side of that building, or are those actual cedar trees? The brain is making a huge number of perceptual hypotheses about what these objects are and how they are arranged. As you move forward, you gain more detailed information. Now you can clearly see that that is not your Uncle Bill. That tree is definitely a sugar maple. Those are just well-executed paintings of cedar trees, and so on. You can use the difference in hypothesis weights between every two physical steps to update the weighting functions on all these perceptual hypotheses! You need not wait until you actually get verification that that is a maple tree. You do not wait until you reach the Bill-like stranger to modify your weighting functions. In fact, you will probably pay little further attention to this figure as you approach. You already have enough information to learn. If, indeed, as you approach still more closely, Uncle Bill calls out to you (making you suddenly realize you had prematurely concluded this was not Bill), you will again update your recognition function weightings. This may even come to consciousness, and you may remark, "Uncle Bill! I hardly recognized you without your beard!"
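This kind of learning from successive estimates, with no ground truth in sight, resembles what machine-learning researchers call temporal-difference learning. Here is a toy sketch of the idea, purely my own illustration with invented numbers, not a model of what the brain actually does: each hypothesis whose confidence rose between two steps gets a boost, and each whose confidence fell gets demoted.

```python
# Toy sketch of learning from successive perceptual hypotheses:
# at each step, reward hypotheses whose confidence rose and demote
# those whose confidence fell -- no real-world verification required.
# The scenario and numbers are invented for illustration.

def update_weights(weights, prev_conf, curr_conf, lr=0.5):
    """Nudge each hypothesis weight by the change in its confidence."""
    return {
        h: weights[h] + lr * (curr_conf[h] - prev_conf[h])
        for h in weights
    }

weights = {"Uncle Bill": 1.0, "stranger": 1.0}
step1 = {"Uncle Bill": 0.6, "stranger": 0.4}   # far away: looks like Bill
step2 = {"Uncle Bill": 0.2, "stranger": 0.8}   # closer: probably not Bill

weights = update_weights(weights, step1, step2)
print(weights)  # weights shift toward "stranger" after just two glances
```

The point of the sketch is only this: the learner never had to reach the figure and confirm its identity; the mere change between two looks was enough to drive an update.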

This type of learning also plays an important part in improving sports performance. As people improve their skill in golf, basketball, tennis, baseball, etc., they begin to anticipate earlier and earlier whether they have "executed" a move properly. An experienced tennis server, for example, generally knows long before their serve is called "out" that they have made an error. This process is not infallible, of course, but it is statistically better than chance, and for very skilled athletes it is much better than chance. You can see it when a slugger hits a home run and takes a skip step to watch the ball go out of the park. (There can be a downside to this facility of intra-psychic learning in sports under certain circumstances, as explained in chapter 23 of The Winning Weekend Warrior.) This means that the skilled athlete gets "feedback" from their own mental model of what they did critical seconds before a beginner, who must wait for feedback from the real world.

These kinds of phenomena are not limited to sight, or indeed to any one sense. You hear a very faint noise. You imagine it to be a cardinal singing. As you walk closer to the bird, you get a better signal and are more certain it is a cardinal. You can use the difference in certainty to internally reward those neuronal paths that were shouting "cardinal! cardinal!" And you demote those neuronal paths that were shouting "car backfire" or "firecracker" or "church bell." If you get close enough to see the cardinal, you do even more internal tuning based on the inter-sensory verification. Similarly, if you walk toward what appears to be an uneven patch in the terrain, you imagine what you must do to compensate for that variation. As you step on the uneven spot, your tactile and kinesthetic senses give you feedback about the terrain. You use this panoply of information from various senses to tune all of them.

While it is vital that, at the end of the day, we obtain feedback about actual consequences, a huge amount of human learning takes place simply by comparing what we think we know based on scant evidence to what we think we know based on slightly less scant evidence. I believe we are doing this continually within and across all our senses and that it actually accounts for the majority of our learning.

The Winning Weekend Warrior

Learning by modeling; in this case by modeling something in the real world.


Ted Cruz Says Climate-Change Fears Falsified by Scientists and Politicians

05 Wednesday Aug 2015

Posted by petersironwood in Uncategorized

≈ Leave a comment

One issue with many of the would-be Republican nominees for President is not that they lack talent; it is simply that they have chosen the wrong profession. Ted Cruz, for example, with a little coaching and experience, might well make an excellent stand-up comic. He could give pretty much the same monologues and keep pretty much the same straight face as he spews arrant nonsense. Audiences would pay and roll in the aisles. It would all be in good fun. Trump is already on board with a career in the media. For him, the so-called "Presidential race" offers a chance to boost ratings for his day job. I suspect that as 2016 rolls around, many of these clowns will settle into lucrative careers once they realize that what they are saying does not have to change; it just has to be said jokingly, on SNL, for example. Who would be left? I would like to see Hillary Clinton run as a quite moderate and reasonable Republican candidate and Bernie Sanders as a Democrat. Then we could have a reasonable and reasoned debate in the mainstream of American politics. Win or lose, the Republican party would no longer be a joke.

No News, Good News

04 Tuesday Aug 2015

Posted by petersironwood in Uncategorized

≈ Leave a comment

A new poll revealed the startling data that Latino voters will not be big supporters of Donald Trump.

Meanwhile in other news, readers are saying good things about my new ebook on how to succeed at amateur athletics. E.g.,

“Dr. John Thomas is a deep thinker and deeply creative in his approach to various concepts. It’s not surprising that he can take a topic like sports and unpack it to present it from the point of view of a disciplined and an earnest participant and practitioner. He is a true enthusiast and naturally curious about human breakthroughs at any level.”

“An excellent Kindle book and a really fun read. I learned a lot from the author about my own perspective on work-life balance and what really matters in my mature years. This is an easy page turner Kindle Edition that I couldn’t put down. Great transition between chapters and suitable for all ages. The author is brilliant with his analogies and I learned about golf and tennis.”

"I'm halfway through your book and it's terrific. I've learned things that I have never thought about. And, I believe that my tennis game has improved!"

I hope you enjoy it too!  Find out more on my author page:

my author page on Amazon Kindle

