
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: HCI

Study Slain by Swamp Monster!

19 Thursday Jul 2018

Posted by petersironwood in America, management, psychology, Uncategorized

≈ 8 Comments

Tags

Business, Design, experiment, HCI, human factors, innovation, politics, science, Study, usability, UX



I’m trying a new format for blog posts. 

For those of you in a hurry, to get to the “bottom line” of this post, you can skip the story and go right to the bold-faced “lesson” at the end. I’d really rather you read the whole thing, of course, but I know some readers are harried and hurried. So, if that describes you right now, feel free. 

——————————————————

In the early 1980’s, researchers at the IBM Watson Research Center invented a new kind of system. Originally, this was called the “Speech Filing System.” It was initially designed to allow so-called “office principals” (sales people, managers, executives, engineers, etc.) to dictate letters and memos which could then be typed up by the pool of typists. Instead of requiring each “office principal” to have (or borrow) a dedicated piece of dictation equipment, they could accomplish this dictation from any touch tone phone. While this offered some savings in cost and convenience in the office, it was even more wonderful on the road. People did not have to take their dictation equipment with them on their travels. They could use any touch-tone phone. 


The system was invented largely by tech-savvy psychologists (including Stephen Boies, John Gould, John Richards, & Jim Schoonard). When they observed people actually using the system, they discovered that the trial users more often used the ancillary messaging facility than they did the “real” dictation features. So, the system was redesigned, repurposed, and then renamed “The Audio Distribution System.” In some ways, using the “Audio Distribution System” was much like leaving a message on an answering machine. However, there were some crucial differences. Typically, a person who called someone and encountered, instead of a human being, a recorded request to leave a message was somewhat taken aback. Many messages on answering machines went something like this: “Hi. Stephen? Oh, you’re not there. OK. This is John. I was hoping … well, I thought you’d be in. Uh. Let’s see. You know what? Call me back. We need to talk.” And, when Stephen discovered that he had a message, he might listen to it and call John back. “Hi, John. Stephen here… I … oh. OK. A message. Sorry. You just called me. Well, um. I’m not sure what you wanted to talk about so. Call me back when you get a chance.”


By contrast, when someone called the “Audio Distribution System” they knew ahead of time they’d be interacting with a machine. So, they could compose a reasonable message before calling the system. Hence, the messages tended to be more coherent and useful; e.g., “Hi, Stephen. This is John. If it’s okay with you, I’m taking off this Friday for a long weekend. If you have any issues with that, let me know.” See? Easy and efficient. 

A second critical difference was that you could listen to your message and edit it. People didn’t do this as often as you might think, but it was comforting to know that you could in case you really messed up. (For instance, a person might say, “You are fired!” when all along they meant to say, “You are NOT fired.”) 


Introducing any new system will have consequences, both intended and unintended. I wanted to see what some of these consequences might be. Corporations, IBM included, like it when they sell lots of product and make lots of money. A related question then was – what is the value of this product to the customer? Why should they want to buy it? 

One hypothesis I wanted to test out was that such a system would increase people’s perceived Peace of Mind. After you leave a meaningful message for someone, you can cross that little item off your mental (or written) “to do” list. By using the Audio Distribution System, I thought one of the user benefits would be increased “Peace of Mind” because they would be able to leave a message any time and any place they had access to a touch-tone phone. They could save their working memory capacity for “higher level” activities such as design, problem solving, and decision making. We were going to roll out a beta test of the Audio Distribution System at the divisional headquarters for the IBM Office Products Division (OPD), in Franklin Lakes, New Jersey. Not coincidentally, OPD would be the division selling the Audio Distribution System (just as they were now selling dictation equipment). Before the trial commenced, I developed a questionnaire designed to get at how much people felt harried, too busy, coping, etc. The hope was that I could compare the “Peace of Mind” scores of people who did and did not get the Audio Distribution System and perhaps show that those with the system felt more at peace than those without. I could also compare “before and after” for those internal beta customers who had the system. 


Before I was to roll out and administer the “Peace of Mind” questionnaire to a sample of people at the OPD Franklin Lakes location, guess what happened just two days before the beta roll-out? OPD was re-organized out of existence! The people who worked there would now be looking for another job elsewhere in IBM (or, failing that, just elsewhere period). The beta trial was cancelled. In any case, even if it hadn’t been cancelled, the impact of the re-organization would have completely swamped (in my estimation) the impact of this new tool. Moreover, it struck me as insensitive and even slightly unethical to ask people to fill out a questionnaire about how hassled they were feeling just days after finding out their entire division had been blown up. How would you react if some psychologist from the Research Center showed up asking you to fill out a questionnaire two days after finding out you no longer had a job?


————————————————-

What is the lesson learned here? You have to understand what is going on in the lives of your users over and above the functions and features directly related to your product or service. Of course, there is always a fairly good chance that some of your users will have overwhelming things going on in their lives that will impact their reactions to your product. Generally you won’t know about divorces, deaths in the family, toothaches, etc. But if something is impacting all your users, you’d best be aware of it and act accordingly. 

————————————-

Speech Filing System

Audio Distribution System – NY Times

Longer explanation of Audio Distribution System

Video of Audio Distribution System’s cousin: “The Olympic Message System”

———————————————————————

Author Page on Amazon

 

In the Brain of the Beholder

17 Tuesday Jul 2018

Posted by petersironwood in America, management, psychology, Uncategorized

≈ 1 Comment

Tags

Design, experiment, HCI, human factors, politics, psychology, science, UX



Most people in the related fields of “Human Factors”, “User Experience”, and “Human Computer Interaction” learn how to run experiments. Formal study often largely focuses on experimental design and statistics. Indeed, these are important subjects. In today’s post though, I want to relate three experiences with actually running experiments. Just for fun, let’s go in reverse chronological order. 

In graduate school at the University of Michigan Experimental Psychology department, one of my classmates told us about an experiment he had just conducted. Often, we designed experiments in which a strictly timed sequence of stimuli (e.g., printed words, spoken words, visual symbols) were presented and then we measured how long it took the “subject” to respond (e.g., press a lever, say a word). Typically, these stimuli were presented fairly quickly, perhaps 1 every second or at most every 4-5 seconds. This classmate, however, had felt this was too stressful and wanted to make the situation less so for the subjects. So, instead of having the stimuli presented, say, every 4 seconds, my classmate decided to be more humane and make the experiment “self-paced.” In other words, no matter how long the subject took to make a response, the next stimulus would be presented 1 second later. So, how did this “kindness” work out in practice? 


A few days later, I heard a scream in the lab down the hall and ran in to see whether everyone was okay. One of my classmate’s first subjects had just literally run out of the experimental room screaming “I can’t take it any more! I quit!” My classmate was flabbergasted. But eventually, he got the subject to calm down and explain why they had been so upset. The subject had begun by responding carefully to the stimuli. So, perhaps they took ten seconds for the first item, and the new stimulus came up one second later. On the second go, they took perhaps 9.5 seconds and then the next stimulus came up one second later. As time went on, the subject responded more and more quickly, so the next stimulus also came up more and more quickly. In the subject’s mind, the experiment was becoming more and more difficult as determined by the experimenter. They had no idea that had they slowed back down to responding once every 10 seconds, they’d only be presented with stimuli at that much slower pace. 
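The pacing rule behind this story can be sketched in a few lines of code. This is only an illustration with made-up numbers (the function name and the example response times are mine, not from the original experiment), but it shows why the subject experienced a runaway speed-up: the gap between stimuli shrinks exactly as fast as the subject speeds up.

```python
# Sketch of the "self-paced" design described above. The next stimulus
# always appears a fixed delay (here, 1 second) after the response, so
# the interval between stimuli tracks the subject's own response times.

def inter_stimulus_intervals(response_times, delay=1.0):
    """Interval between successive stimuli = response time + fixed delay."""
    return [rt + delay for rt in response_times]

# A hypothetical subject who speeds up from 10 s to 2 s per response
# sees the stimulus-to-stimulus interval shrink from 11 s to 3 s:
rts = [10.0, 9.5, 8.0, 6.0, 4.0, 2.0]
print(inter_stimulus_intervals(rts))
```

From the subject's point of view, the experimenter appears to be presenting stimuli faster and faster; in fact, the subject is driving the pace entirely on their own.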

So, here we have one way that these so-called subjects differ from each other. They may not interpret the experiment in the framework in which it is thought of by the experimenter. In this particular case, there was a difference in the attribution of causality, but there are many other possibilities. This is one of many reasons for doing a pilot experiment and talking with the subjects. 

The next earlier example took place at Case Western Reserve. In my senior year, I was married and had a kid, so I worked three part-time jobs while going to school full-time. One of the jobs was teaching “Space Science” and “Aeronautics” to some sixth graders at the Cleveland Supplementary Educational Center. Another one of the jobs was as a Research Assistant to a Professor in the Psychology Department. We were doing an experiment with kids in an honest-to-God “Skinner Box.” The kids pulled a lever and won nickels. Meanwhile, on a screen in front of them, there appeared a large red circle and then we looked at how much the kid continued to press the lever (without winning any more nickels) when confronted with the same red circle, a smaller red circle, a red ellipse, etc. 


There was a small waiting room next to the Skinner Box, and it had a greenboard in it. So, since there was another kid waiting there just twiddling his thumbs, I decided to give him a little mini-lecture on the solar system: sun at the center, planets in order, some of the major moons, etc. 

After each kid had finished the experiment, I always asked them what they thought was going on during the experiment. (This was despite the fact that the Professor I was working for was a “strict behaviorist”). When I asked this kid what he thought was going on, he referred back to my lecture about the solar system! 

Oops! Just because the lecture and the experiment were two completely unrelated things in my mind didn’t mean they were for the kid! Of course, they seemed related to him! Both involved circles and they both took place at the same rather unique and unusual place: a psychology laboratory. 

And this too is worth thinking about. We psychologists and Human Factors people typically report on the design of the experiment and hopefully relate the instructions. We, however, do not typically report on a host of other things that we think of as irrelevant but may impact the subject and influence their behavior. Was the receptionist nice to them or rude? What did their friends say about going to do a psychology experiment or a UX study? When the experimenter explained the experiment and asked whether there were any questions, was that a sincere question? Or, was it just a line delivered in a rather mechanical monotone that encouraged the subject not to say a word? 

Of course, the very fact that humans differ so much is why some psychologists prefer to use rats. And, the psychologists (as well as a variety of biologists and medical doctors) don’t just use any old rats. They use rats that are carefully bred to be “lab rats.” They are expected to act in a fairly uniform fashion. And, for the most part, they do.


I was helping my girlfriend with her intro psych project. We were replicating the Yerkes-Dodson Law. This states that as you increase stress, performance improves, but only to a point. After that, additional stress causes performance to deteriorate (something that software development managers would do well to note). One of the ways I helped was to get some of the rats out of their cages. I would open up the top of the cage, reach around the rat behind their neck, and pull them out. Not a big deal. All the rats were quite placid and easy to handle. They all acted the same. Then, it was time to get the day’s last rat, who was to be placed in the “high stress” condition. I went to the cage and opened it just as I had done for the last dozen rats. But instead of sitting there placidly and twitching its nose, this rat raced to the bars of his cage and hung on with both of his little legs and both of his little arms with all his might! Which might was not equal to mine but was rather incredible for such a tiny fellow. Rats sometimes squeak rather like a mouse does. But not this one! This carefully bred clone barked! Loudly! Like a dog. Whether this rat had suffered some previous trauma or was subject to some kind of odd mutation, I cannot say. 
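The Yerkes-Dodson relationship mentioned above is often drawn as an inverted U. Here is a minimal caricature of it in code; all of the numbers (the optimum, the peak, the falloff rate) are invented for illustration, since the actual law is an empirical generalization, not a formula.

```python
# Inverted-U caricature of the Yerkes-Dodson Law: performance rises
# with stress up to a moderate optimum, then deteriorates. All
# parameter values are hypothetical.

def performance(stress, optimum=5.0, peak=100.0, falloff=2.0):
    """Quadratic inverted-U: best at `optimum`, worse on either side."""
    return peak - falloff * (stress - optimum) ** 2

# Moderate stress beats both too little and too much:
print(performance(2.0), performance(5.0), performance(8.0))
```

The managerial moral the author draws applies directly: pushing a team further and further past the optimum does not keep buying performance; it starts costing it.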

But this I can say. Your “users” or “subjects” are not identical to each other. And, while modeling is a very useful exercise, they will never “be” identical to your model. They are always acting and reacting to a reality as beheld by them. And their reality will always be somewhat different from yours. That does not mean, however, that generalizations about people — or rats — are always wrong or that they are never useful. 

It does not mean that gravity will not affect people just because they refuse to believe in it. There really is a reality out there. And, that reality can kill rats or people in an eye blink; especially those who actively refuse to see what is happening before their very eyes. 


Who knows? You might be about to be placed in the “High Stress” condition no matter how tightly you hang on to the bars of your cage – or, to your illusions.  

————————————-

Author Page on Amazon

Madison Keys, Francis Scott Key, the “Prevent Defense” and giving away the Keys to the Kingdom. 

07 Saturday Jul 2018

Posted by petersironwood in America, family, management, psychology, sports, Uncategorized

≈ 1 Comment

Tags

Business, career, HCI, human factors, IBM, life, school, sports, UX


Madison Keys, for those who don’t know, is an up-and-coming American tennis player. In this Friday’s Wimbledon match, Madison sprinted to an early 4-1 lead. She accomplished this through a combination of ace serves and torrid ground strokes. Then, in an attempt to consolidate, or protect her lead, or play the (in)famous “prevent defense” imported from losing football coaches, she stopped hitting through the ball, guiding it carefully instead, into the net, or well long, or just inches wide. 


Please understand that Madison Keys is a wonderful tennis player. And, her “retreat” to being “careful” and playing the “prevent defense” is a common error that many professional and amateur players fall prey to. It should also be pointed out that what appears to be overly conservative play to me, as an outside observer, could easily be due to some other cause such as a slight injury or, even more likely, because her opponent adjusted to Madison’s game. Whether or not she lost because of using the “prevent defense” no one can say for sure. But I can say with certainty that many people in many sports have lost precisely because they stopped trying to “win” and instead tried to protect their lead by being overly conservative; changing the approach that got them ahead. 

Francis Scott Key, of course, wrote the words to the American National Anthem which ends on the phrase, “…the home of the brave.” Of course, every nation has stories of people behaving bravely and the United States of America is no exception. For the American colonies to rebel against the far superior naval and land forces (to say nothing of sheer wealth) of the British Empire certainly qualifies as “brave.” 


In my reading of American history, one of our strengths has always been taking risks in doing things in new and different ways. In other words, one of our strengths has been being brave. Until now. Now, we seem in full retreat. We are plunging headlong into the losing “prevent defense” borrowed from American football. 

American football can hardly be called a “gentle sport” – the risk of injury is ever present, and now we know that even those who manage to escape broken legs and torn ligaments may suffer internal brain damage. But many coaches still tend to play the “prevent defense.” In case you’re unfamiliar with American football, here is an illustration of its effect on the score. A team plays a particular way for three quarters of the game and leads 42-21. If you’re a fan of linear extrapolation, you might expect the final score to be something like 56-28. But coaches sometimes want to “make sure” they win, so they switch to the “prevent defense,” which basically means letting the other team make first down after first down, keeping possession of the ball and scoring, if somewhat slowly. The coach suddenly loses confidence in the method that worked for three quarters of the game. It is not at all unusual for the team employing this “prevent defense” to lose; in this example, perhaps, 42-48. 
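The linear-extrapolation arithmetic in the example above (a hypothetical score, not a real game) works out like this:

```python
# Projecting a partial score to a full game at the same scoring rate.
# The 42-21 score after 3 of 4 quarters is the hypothetical example
# from the text; the function name is mine.

def project_final(points, quarters_played=3, quarters_total=4):
    """Scale a partial score up to the whole game at the same rate."""
    return points * quarters_total / quarters_played

print(project_final(42), project_final(21))  # 56.0 28.0
```

The point, of course, is that the projection only holds if the team keeps playing the way it has been playing; the prevent defense abandons exactly that assumption.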


America has apparently decided, now, to play a “prevent defense.” Rather than being innovative and bold and embracing the challenges of new inventions and international competition, we instead want to “hold on to our lead” and introduce protective tariffs just as we did right before the Great Depression. Rather than accepting immigrants with different foods, customs, dress, languages, and religions — we are now going to “hold on to what we have” and try to prevent any further evolution. In the case of American football, the prevent defense sometimes works. In the case of past civilizations that tried to isolate themselves, it hasn’t and it won’t. 


This is not to say that America (or any other country) should right now have “open borders” and let everyone in for every purpose. Nor should a tennis player hit every shot with all their might. Nor should a football team try the riskiest possible plays at every turn. All systems need to strike a balance among replicating what works, defending what one has, and exploring what is new and different. That is what nature does. Every generation “replicates” aspects of the previous generation but every generation must also explore new directions. Life does this through sexual selection, mutation, and crossover. 

This balance plays out in career as well. You need to decide for yourself how much and what kinds of risks to take. When I obtained my doctorate in experimental psychology, for example, it would have been relatively un-risky in many ways to get a tenure-track faculty position. Instead, I chose managing a research project on the psychology of aging at Harvard Med School. To be sure, this is far less than the risk that some people take when; e.g., joining “Doctors without borders” or sinking all their life savings (along with all the life savings of their friends and relatives) into a start-up. 

At the time, I was married and had three small children. Under these circumstances, I would not have felt comfortable having no guaranteed income. On the other hand, I was quite confident that I could write a grant proposal to continue to get funded by “soft money.” Indeed, I did write such a proposal along with James Fozard and Nancy Waugh, who were at once my colleagues, my bosses, and my mentors. Our grant proposal was neither funded nor rejected but “deferred” and then it was deferred again. At that point, only one month of funding remained before I would be out of a job. I began to look elsewhere. In retrospect, we all realized it would have been much wiser to have a series of overlapping grants so that all of our “funding eggs” were never in one “funding agency’s basket.” 


I began looking for other jobs and had a variety of offers from colleges, universities, and large companies. I chose IBM Research. As it turned out, by the way, our grant proposal was ultimately funded for three years, but we only found out after I had already committed to go to IBM. During this job search, I was struck by something else. My dissertation had been on problem solving but my “post-doc” was in the psychology of aging. So far as I could tell, this didn’t bother any of the interviewers in industry in the slightest. But it really freaked out some people in academia. It became clear that in academia, at least among many, you were expected to choose a specialty and stick with it. Perhaps you need not do that during your entire academic career, but anything less than a decade smacked of dilettantism. At least, that was how it felt to me as an interviewee. By contrast, it didn’t bother the people who interviewed me at Ford or GM that I knew nothing more than the average person about cars and had never really thought about the human factors of automobiles. 


The industrial jobs paid more than the academic jobs and that played some part in my decision. The job at GM sounded particularly interesting. I would be “the” experimental psychologist in a small inter-disciplinary group of about ten people who were essentially tasked with trying to predict the future. The “team” included an economist, a mathematician, a social psychologist, and someone who looked for trends in word frequencies in newspapers. The year was 1973 and US auto companies were shocked and surprised to learn that their customers suddenly cared about gas mileage! These companies didn’t want to be shocked and surprised like that again. The assignment reminded me of Isaac Asimov’s fictional character in the Foundation Trilogy — Hari Seldon — who founded “psychohistory.” We had the chance to do it in “real life.” It sounded pretty exciting! 


On the other hand, cars seemed to me to be fundamentally an “old” technology while computers were the wave of the future. It also occurred to me that a group of ten people from quite different disciplines trying to predict the future might sound very cool to me and apparently to the current head of research at GM, but it might seem far more dispensable to the next head of research. The IBM problem that I was to solve was much more fundamental. IBM saw that the difficulty of using computers could be a limiting factor in their future growth. I had had enough experience with people — and with computers — to see this as a genuine and enduring problem for IBM (and other computer companies); not as a problem that was temporary (such as the “oil crisis” appeared to be in the early 70’s). 


There were a number of additional reasons I chose IBM. IBM Research’s population at the time was far more diverse than that of the auto companies. None of them were very diverse when it came to male/female ratios. At least IBM Research did have people from many different countries working there, and it probably helped their case that an IBM Researcher had just been awarded a Nobel Prize. Furthermore, the car company research buildings bored me; they were the typical rectangular prisms that characterize most of corporate America. In other words, they were nothing special. Eero Saarinen, however, had designed the IBM Watson Research Lab. It sat like an alien black spaceship ready to launch humanity into a conceptual future. It was set like an onyx jewel atop the jade hills of Westchester. 

I had mistakenly thought that because New York City was such a giant metropolis, everything north of “The City” (as locals call it) would be concrete and steel for a hundred miles. But no! Westchester was full of cut granite, rolling hills, public parks of forests marbled with stone walls and cooled by clear blue lakes. My commute turned out to be a twenty-minute, trafficless drive through a magical countryside. By contrast, since Detroit car companies at that time held a lot of political power, there was no public transportation to speak of in the area. Everyone who worked at the car company headquarters spent at least an hour in bumper to bumper traffic going to work and another hour in bumper to bumper traffic heading back home. In terms of natural beauty, Warren, Michigan just doesn’t compare with Yorktown Heights, NY. Yorktown Heights even smelled better. I came for my interview just as the leaves began painting their autumn rainbow palette. Westchester roads even seemed more creative. They wandered through the land as though illustrative of Brownian motion, while Detroit area roads were as imaginative as graph paper. Northern Westchester county sports many more houses now than it did when I moved there in late 1973, but you can still see the essential difference from these aerial photos. 

[Aerial map photos: Yorktown Heights, NY and Warren, MI]

The IBM company itself struck me as classy. It wasn’t only the Research Center. Everything about the company stated “first class.” Don’t get me wrong. It wasn’t a trivial decision. After grad school in Ann Arbor, a job in Warren kept me in the neighborhood I was familiar with. A job at Ford or GM meant I could visit my family and friends in northern Ohio much more easily as well as my colleagues, friends and professors at the U of M. The offer from IBM felt to me like an offer from the New York Yankees. Of course, going to a top-notch team also meant more difficult competition from my peers. I was, in effect, setting myself up to go head to head with extremely well-educated and smart people from around the world. 

You also need to understand that in 1973, I would be only the fourth Ph.D. psychologist in a building filled with physicists, mathematicians, computer scientists, engineers, and materials scientists. In other words, nearly all the researchers considered themselves to be “hard scientists” who delved in quantitative realms. This did not particularly bother me. At the time, I wanted very much to help evolve psychology to be more quantitative in its approach. And yet, there were some nagging doubts that perhaps I should have picked a less risky job in a psychology department. 

The first week at IBM, my manager, John Gould, introduced me to yet another guy named “John” — a physicist whose office was near mine on aisle 19. This guy had something like 100 patents. A few days later, I overheard one of John’s younger colleagues in the hallway excitedly describing some new findings. Something like the following transpired: 

“John! John! You can’t believe it! I just got these results! We’re at 6.2 times 10 to the 15th!” 

His older colleague replied, “Really? Are you sure? 6.2 times 10 to the 15th?” 

John’s younger colleague, still bubbling with enthusiasm: “Yes! Yes! That’s right. You know. Within three orders of magnitude one way or the other!” 

I thought to myself, “three orders of magnitude one way or the other? I can manage that! Even in psychology!” I no longer suffered from “physics envy.” I felt a bit more confident in the correctness of my decision to jump into these waters which were awash with sharp-witted experts in the ‘hard’ sciences. It might be risky, but not absurdly risky.


Of course, your mileage may differ. You might be quite willing to take a much riskier path or a less risky one. Or, maybe the physical location or how much of a commute is of less interest to you than picking the job that most advances your career or pays the most salary. There’s nothing wrong with those choices. But note what you actually feel. Don’t optimize in a sequence of boxes. That is, you might decide that your career is more important than how long your commute is. Fair enough. But there are limits. Imagine two jobs that are extremely similar and one is most likely a little better for your career but you have to commute two hours each way versus 5 minutes for the one that’s not quite so good for your career. Which one would you pick? 

In life beyond tennis and beyond football, one also has to realize that your assessment of risk is not necessarily your actual risk. Many people have chosen “sure” careers or “sure” work at an “old, reliable” company only to discover that the “sure thing” actually turned out to be a big risk. I recall, for example, reading an article in INC. magazine that two “sure fire” small businesses were videotape rental stores and video game arcades. Within a few years of that article, they were almost sure-fire losers. Remember Woolworths? Montgomery Ward?

At the time I joined IBM, it was a dominant force in the computer industry. But there are no guarantees — not in career choices, not in tennis strategy, not in football strategy, not in playing the “prevent defense” when it comes to America. The irony of trying too hard to “play it safe” is illustrated by this short story about my neighbor from Akron: 


Wilbur’s Story

Wilbur’s dead. Died in Nam. And, the question I keep wanting to ask him is: “Did it help you face the real dangers? All those hours together we played soldier?”

Wilbur’s family moved next door from West Virginia when I was eleven. They were stupendously uneducated. Wilbur was my buddy though. We were rock-fighting the oaks of the forest when he tried to heave a huge toaster-oven sized rock over my head. Endless waiting in the Emergency Room. Stitches. My hair still doesn’t grow straight there. “Friendly fire.”

More often, we used wooden swords to slash our way through the blackberry and wild rose jungle of The Enemy; parry the blows of the wildly swinging grapevines; hide out in the hollow tree; launch the sudden ambush.

We matched strategy wits on the RISK board, on the chess board, plastic soldier set-ups. I always won. Still, Wilbur made me think — more than school ever did.

One day, for some stupid reason, he insisted on fighting me. I punched him once (truly lightly) on the nose. He bled. He fled crying home to mama. Wilbur couldn’t stand the sight of blood.

I guess you got your fill of that in Nam, Wilbur.

After two tours of dangerous jungle combat, he was finally to ship home, safe and sound, tour over — thank God!

He slipped on a bar of soap in the shower and smashed the back of his head on the cement floor.

Wilbur finally answers me across the years and miles: “So much for Danger, buddy,” he laughs, “Go for it!”

Thanks, Wilbur.

Thanks.

—————————————-

And, no, I will not be giving away the keys to the kingdom. Your days of fighting for freedom may be over. Mine have barely begun.


Author Page on Amazon

Support Both Flow & Breakdown

21 Monday May 2018

Posted by petersironwood in America, management, psychology, Uncategorized

≈ 3 Comments

Tags

collaboration, contextual design, Design, environment, error messages, HCI, human factors, learning, pattern language, pliant systems, politics, usability

Support Both Flow & Breakdown


Prolog/Acknowledgement/History: 

Only a few days after moving into our San Diego home (with a beautiful drip-irrigated garden), I glanced outside to see a geyser spouting about ten feet into the air. San Diego can only survive long term if people conserve water! Yet, here we were — wasting water. I rushed outside to turn off the sprinkler system. As I ran to the controller, I noted in passing that the nearby yard lay soaked with pools of water. I turned off the sprinklers — except for the geyser, which continued its impersonation of "Old Faithful." I tried turning the valve on that particular sprinkler and managed in that way to completely soak myself, but the water waste continued unabated. We called the gardener, who explained the location of the shutoff valve for the entire house and garden. Later, he came and replaced the valve with a newer type. The old valve had failed by sticking in the fully ON position!

Often in the course of my life, I have been frustrated by interacting with systems — whether human or computer — that were clearly designed for a different set of circumstances than the ones I found myself in at the time. In a sense, the Pattern here is a specific instance of a broader design Pattern: Design for Broad Range of Contexts. The specific point I want to focus on in this Pattern is that design should support the "normal" flow of things when they are working well, but should also support likely modes of breakdown.

During the late 1970's, I worked with Ashok Malhotra and John Carroll at IBM Research on a project we called "The Psychology of Design." We used a variety of methods, but one was observing and talking with a variety of designers in various domains. One of the things we discovered about good designers was a common process that at first seemed puzzling. Roughly speaking, designers would abstract a set of requirements from a concrete situation. They would then create a design that logically met all the requirements. Since we were only studying design and not the entire development process (which might include design, implementation, debugging, etc.), it might seem that the design process would end at that point. After all, the designer had just come up with a design that fulfilled the requirements.

What good designers actually did, however, at least on many occasions, was to take their abstract design and imagine it operating back in the original concrete situation. When they imagined their design working in this concrete reality, they often "discovered" additional requirements, or interactions among design elements or requirements, that had been overlooked in the initial design. While unanticipated effects can occur in purely physical systems (e.g., bridges flying apart from the bridge surface acting like a wing; O-rings cracking at sufficiently cold temperatures), it seems that human social systems are particularly prone to disastrous designs that "fulfill" the requirements as given.

woman in white wedding gown near orange car

Photo by Slobodan Jošić on Pexels.com

 

The Pattern here specifically focuses on one very common oversight. Systems are often designed under the assumption that everything in the environment of the system is working as it "should" or as intended. This particular type of breakdown was featured in an important theoretical paper authored by Harris and Henderson and presented at CHI '99. That paper argued that systems should be "pliant" rather than rigid. A common example of a non-pliant system, which most readers have encountered, is calling an organization and being put into an automated call-answering system that has no appropriate category for the current situation and no way to get through to a human operator.

A telling example from their CHI Proceedings article is that of a paper-based form that was replaced by a computerized system with fixed fields. So, for example, there were only so many characters for the various address fields. When someone needed to make an exception to the address syntax on the paper form, it was easy. They could write: "When it's time to ship the package, please call this number to find out which port the Captain will be in next and ship it there: 606-555-1212." In the computerized form, this was impossible. In fact, there were so many such glitches that the workers who actually needed to get their work done used the "required," "productivity-enhancing" computer system and also duplicated everything in the old paper system so that they could actually accomplish their tasks.
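The rigid-versus-pliant distinction can be sketched in a few lines of code. This is a minimal illustration with invented function names and field limits, not the actual system Harris and Henderson described: the rigid field rejects anything outside its fixed syntax, while the pliant one accepts the exception and flags it for human attention, much as the paper form allowed a marginal note.

```python
# Hypothetical sketch: a rigid fixed-field form vs. a "pliant" one.
# Field names and the 30-character limit are invented for illustration.

def rigid_address_field(text, max_len=30):
    # The rigid system simply refuses anything that breaks the syntax.
    if len(text) > max_len:
        raise ValueError("address too long")
    return {"address": text}

def pliant_address_field(text, max_len=30):
    # The pliant system keeps the exceptional content and routes it
    # to a human, instead of refusing to record it at all.
    if len(text) > max_len:
        return {"address": text[:max_len], "note_for_human": text}
    return {"address": text}

note = ("When it's time to ship, call 606-555-1212 to learn "
        "which port the Captain will be in next.")
print(pliant_address_field(note)["note_for_human"] == note)  # True
```

The design choice is the same one the workers made for themselves with the duplicate paper system: somewhere, the exception has to be allowed to exist.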

As part of the effort (described in the last blog post) to get IBM to pay more attention to the usability of its products, we pushed to make sure every development lab had a usability lab that was adequately equipped and staffed. This was certainly a vital component. However, usability in the lab did not necessarily ensure usability in the field. There are many reasons for that, and I collaborated with Wendy Kellogg in the late 1980's to catalog some of them. This effort was partly inspired by a conversation with John Whiteside, who headed the usability lab for Digital Equipment Corporation. They brought people who used a word processor into their usability lab and made numerous improvements in the interface. One day he took some of the usability group out to observe people using the text editor in situ in a manuscript center. They discovered that the typists spent 7 hours every day typing and 1 hour every day counting up, by hand, the number of lines that they had typed that day (which determined their pay). Of course, it was now immediately obvious how to improve productivity by 14%: have the system count the lines automatically, turning seven hours of daily typing into eight (a gain of one seventh). The work of this group seems to have been inspirational for Beyer & Holtzblatt's Contextual Design as well as the Carroll & Kellogg (1989) paper on "Artifact as Theory Nexus."

fire portrait helmet firefighter

Photo by Pixabay on Pexels.com

 

Author, reviewer and revision dates: 

Created by John C. Thomas in May, 2018


Related Patterns: 

Reality Check, Who Speaks for Wolf?

Abstract: 

When designing a new system, it is easy to imagine a context in which all the existing systems that might interact with the new system will operate “normally” or “properly.” In order to avoid catastrophe, it is important to understand what reasonably likely failure modes might be and to design for those as well.

Context: 

For people to design systems, it is necessary to make some assumptions that separate the context of the design from what is being designed. There is a delicate balance. If you define the problem too broadly, you run the risk of addressing a problem that is intractable intellectually, logistically, or financially. On the other hand, if you define the problem too narrowly, you run the risk of solving a problem that is too special, temporary, or fragile to do anyone much good.

In the honest pursuit of trying to separate out the problem from the context, it happens that one particular form of simplification is particularly popular. People assume that all the systems that will touch the one they are designing will not fail. That often includes human beings who will interact with the system. Such a design process may also presume that electrical power will never be interrupted or that internet access will be continuous.

Systems so designed may have a secondary and more insidious effect. By virtue of having been designed with no consideration to breakdowns, the system will tend to subtly influence the people and organizations that it touches not to prepare for such breakdowns either.

Problem:

When the systems that touch a given system do fail, which can always happen, and no consideration has been given to failure modes, the impact can be disastrous. Most typically, when the system has not been designed to deal with breakdowns, the personnel selection, training, and documentation also fail to deal with breakdowns. As a result, not only are the mechanisms of the system unsuited to breakdowns; the human organization surrounding the breakdown is also unprepared. Not only is there a possibility of immediate catastrophe; the organization is also unprepared to learn. In consequence, mutual trust within and of the organizations around the system is also severely damaged.

architecture building fire exit ladders ladder

Photo by Photo Collections on Pexels.com

Forces:

  • Design is a difficult and complex activity and the more contingencies and factors that are taken into account, the more difficult and complex the design activity becomes.
  • Not every single possibility can be designed for.
  • People working on a design have a natural tendency to “look on the bright side” and think about the upside benefits of the system.
  • People who try to “sell” a new system stress its benefits and tend to avoid talking about its possible failures.
  • It is uncomfortable to think about possible breakdowns.
  • When anticipated breakdowns occur, the people in relevant organizations tend to think about how to fix the situation and reduce the probability or impact of breakdowns for the future.
  • When unanticipated breakdowns occur, the people in relevant organizations tend to try to find the individual or individuals responsible and blame them. This action leaves the probability and impact of future breakdowns unimproved.
  • When people within an organization are blamed for unanticipated system failure, it decreases trust of the entire organization as well as mutual trust within the organization.

  • Even when consideration of support for breakdown modes is planned for, it is often planned for late in an ambitious schedule. The slightest slippage will often result in breakdowns being ignored.

Solution:

When designing a system, make sure the design process deals adequately with breakdown conditions as well as the “normal” flows of events. The organizations and systems that depend on a system also need to be designed to deal with breakdowns. For example, people should be trained to recognize and deal with breakdowns. Organizations should have a process in place (such as the After Action Review) to learn from breakdowns. Having a highly diverse design team may well improve the chances of designing for likely breakdowns. 

Resulting Context:

Generally speaking, a system designed with attention to supporting both the "normal" flow of events and likely breakdown modes will be more robust and resilient. Because the system design takes these possibilities into account, it is also likely that documentation and training will help people prepare for breakdowns. Furthermore, if breakdowns are anticipated, it is easier for the organization to learn how to help prevent breakdowns and, over time, to improve its responses to them. There is a further benefit; viz., that mutual trust and cooperation will be less damaged in a breakdown. The premise that breakdowns will happen puts everyone more in the frame of mind to learn and improve rather than simply blame and point fingers.


Examples: 

1. Social networking sites were originally designed to support friends sharing news, information, pictures, and so on. "Flow" is when this is what is actually going on. Unfortunately, as we now know, social media sites can also fail to work as intended, not because there are "errors" in the code or UX of the systems themselves but because the social and political systems that form their context have broken down. The intentional misappropriation of an application or system is just one of many types of breakdowns that can occur.

2. When I ran the AI lab at NYNEX in the 1990’s, one of the manufacturers of telephone equipment developed a system for telephone operators that was based on much more modern displays and keyboards. In order to optimize performance of the system, the manufacturer brought in representative users; in this case, telephone operators. They redesigned the workflow to reduce the number of keystrokes required to perform various common tasks. At that time, operators were measured in terms of their “Average Work Time” to handle calls.

In this particular case, the manufacturer had separated the domain into what they were designing for (namely, the human-machine interface between the telephone operator and their terminal) and the context (which included what the customer did). While this seemed like a reasonable approach, it turned out, when the HCI group at NYNEX studied the problem with the help of Bonnie John, that the customer's behavior was actually a primary determinant of the overall efficiency of the call. While it was true that the new process required fewer keystrokes on the part of the telephone operator, these "saved" keystrokes occurred when the customer, not the telephone operator, was on the critical path. In other words, the operator had to wait for the customer anyway, so one or two fewer keystrokes did not affect the overall average work time. However, the suggested workflow involved an extra keystroke that occurred when the operator's behavior was on the critical path. As it turned out, the "system" that needed to be redesigned was not actually the machine-user system but the machine-user-customer system. In fact, the biggest improvement in average work time came from changing the operator's greeting from "New York Telephone. How can I help you?" to "What City Please?" The latter greeting tended to produce much more focused conversation on the part of the customer.
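The critical-path logic can be made concrete with a toy model. The durations below are invented for illustration (the real analysis used CPM-GOMS; see Gray, John, & Atwood in the references). Keystrokes executed while the customer is talking cost nothing, because the customer's talk dominates that stage of the call; a keystroke added while the operator alone is working lengthens every call.

```python
# Toy critical-path model of an operator-assisted call.
# All numbers are hypothetical, not measured Project Ernestine values.
# Stage 1: the customer talks while the operator keys in parallel;
#   the stage lasts as long as the LONGER of the two activities.
# Stage 2: the operator works alone, so every keystroke adds time.

KEYSTROKE = 0.3  # assumed seconds per keystroke

def call_time(keys_during_customer_talk, keys_while_operator_critical,
              customer_talk=5.0):
    stage1 = max(customer_talk, keys_during_customer_talk * KEYSTROKE)
    stage2 = keys_while_operator_critical * KEYSTROKE
    return stage1 + stage2

# Old workflow: 12 total keystrokes, only 4 on the critical path.
old = call_time(keys_during_customer_talk=8, keys_while_operator_critical=4)
# "Improved" workflow: 11 total keystrokes, but 5 on the critical path.
new = call_time(keys_during_customer_talk=6, keys_while_operator_critical=5)

print(round(old, 1))  # 6.2
print(round(new, 1))  # 6.5 -- fewer total keystrokes, yet a slower call
```

The point of the sketch is that counting keystrokes measures the wrong system; only keystrokes on the critical path of the machine-user-customer system affect average work time.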

Just to be clear, this is an example of the broader point that some of the most crucial design decisions are not about your solution to the problem you are trying to solve but your decision about what the problem is versus what part of the situation you decide is off-limits; something to ignore rather than plan for. A very common oversight is to ignore breakdowns, but it’s not the only one.

black rotary telephone beside beige manekin

Photo by Reynaldo Brigantty on Pexels.com

3. In a retrospective analysis of the Three Mile Island nuclear meltdown, many human factors problems came to light. Many of them had to do with insufficient preparation for dealing with breakdowns. I recall three instances. First, the proper functioning of many components was shown by a red indicator light being on. When one of the components failed, this was indicated by one light, in a whole bank of indicator lights, not being on. This is not the most salient of signals! To me, it clearly indicates a design mentality steering away from thinking seriously about failure modes. This is not surprising, given the fear and controversy surrounding nuclear power. Those who operate and run such plants do not want the public, at least, to think about failure modes.

Second, there was some conceptual training for the operators about how the overall system worked. But that training was not sufficient for real time problem solving about what to do. In addition, there were manuals describing what to do. But the manuals were also not sufficiently detailed to describe precisely what to do.

Third, at one critical juncture, one of the plant operators closed a valve and "knew" that he had closed it because of the indicator light next to the valve-closure switch. He then based further actions on the knowledge that the valve had been closed. Guess what? The indicator light showing "valve closure" was not based on feedback from a sensor at the site of the valve. No. The indicator light next to the switch was lit by a collateral current from the switch itself. All it really showed was that the operator had changed the switch position! Under "normal" circumstances, there is a perfect correlation between the position of the switch and the position of the valve. Under failure mode, however, this was no longer true.
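The distinction between commanded state and sensed state can be sketched in code. This is an illustrative toy, not the actual plant logic: the "wrong" panel lights the lamp from the switch itself, reporting what was commanded; the "right" panel reads back a sensor at the valve, reporting what actually happened.

```python
# Illustrative sketch of the indicator-light problem (invented names,
# not real control-system code).

class Valve:
    def __init__(self):
        self.commanded_closed = False
        self.actually_closed = False
        self.stuck = False          # failure mode: valve jams open

    def command_close(self):
        self.commanded_closed = True
        if not self.stuck:
            self.actually_closed = True

def light_from_switch(v):
    # What the plant had: the lamp mirrors the switch position.
    return v.commanded_closed

def light_from_sensor(v):
    # What it needed: the lamp mirrors the valve's actual state.
    return v.actually_closed

v = Valve()
v.stuck = True              # the breakdown no one designed for
v.command_close()
print(light_from_switch(v))  # True  -- operator "knows" the valve is closed
print(light_from_sensor(v))  # False -- the valve is still open
```

Under normal operation the two lights always agree, which is exactly why the flaw stayed invisible until a breakdown occurred.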

accident action danger emergency

Photo by Pixabay on Pexels.com

4. The US Constitution is a flexible document that takes into account a variety of failure modes. It specifies what to do, e.g., if the President dies in office and has been amended to specify what to do if the President is incapacitated. (This contingency was not really specified in the original document). The Constitution presumes a balance of power and specifies that a President may be impeached by Congress for treasonous activity. It seems the US Constitution, at least as amended, has anticipated various breakdowns and what to do about them.

There is one kind of breakdown, however, that the U.S. Constitution does not seem to have anticipated. What if society becomes so divided, and the majority of members in Congress so beholden to special interests, that they refuse to impeach a clearly treasonous President or a President clearly incapacitated or even under the obvious influence of one or more foreign powers? Unethical behavior on the part of individuals in power is a breakdown mode clearly anticipated in the Constitution. But it was not anticipated that a large number of individuals would simultaneously be unethical enough to put party over the general welfare of the nation.  Whether this is a recoverable oversight remains to be seen. If democracy survives the current crisis, the Constitution might be further amended to deal with this new breakdown mode.

5. In IT systems, the error messages that are shown to end users are most often messages originally designed to help developers debug the system. Despite guidelines about error messages developed over half a century ago, those guidelines are typically not followed. From the user's perspective, it appears as though the developers know that something "nasty" has just happened and want to run away from it as quickly as possible before anyone can get blamed. They remind me of a puppy who has just chewed up its master's slippers and knows damned well it is in trouble. Instead of "owning up" to its misbehavior, it hides under the couch.

Despite many decades of pointing out how useless it is to get an error message such as "Tweet not sent," "Invalid Syntax," or "IOPS44," such messages still abound in today's applications. Fifty years ago, when most computers had extremely limited storage, there may have been an excuse for succinct error messages that had to be looked up in a paper manual. But today? Error messages should minimally make it clear that there is an error and how to recover from it. In most cases, something should also be said about why the error state occurred. For instance, instead of "Tweet not sent," a message might read: "Tweet not sent because an included image is no longer linkable; retry with new image or link," or "Tweet not sent because it contains a potentially dangerous link; change to allow preview," or "Tweet not sent because the system timed out; try again. If the problem persists, see FAQs on tweet time-out failures." I haven't tested these, so I am not claiming they are the "right" messages, but they carry some information.
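The guideline above (say what failed, why, and how to recover) can be captured in a tiny helper. This is an invented illustration, not any real platform's API, and the message texts are the hypothetical examples from the paragraph above.

```python
# Hypothetical sketch: force every error report to carry all three
# parts the guideline asks for, instead of a bare "Tweet not sent."

def format_error(what_failed, cause, recovery):
    # Each argument is mandatory, so a developer cannot emit an error
    # without stating the cause and a recovery path.
    return f"{what_failed}: {cause}. {recovery}."

msg = format_error(
    "Tweet not sent",
    "the included image is no longer linkable",
    "Retry with a new image or link",
)
print(msg)
# Tweet not sent: the included image is no longer linkable. Retry with a new image or link.
```

The design point is structural: making cause and recovery required parameters turns the guideline into something the code cannot quietly skip.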

Today’s approach to error messages also has an unintended side-effect. Most computer system providers now presume that most errors will be debugged and explained on the web by someone else. This saves money for the vendor, of course. It also gives a huge advantage to very large companies. You are likely to find what an error message means and how to fix the underlying issue on the web, but only if it is a system that already has a huge number of users. Leaving error message clarification to the general public advantages the very companies who have the resources to provide good error messages themselves and keeps entrenched vendors entrenched.

slippery foot dangerous fall

Photo by Pixabay on Pexels.com

References: 

Alexander, C., Ishikawa, S., Silverstein, M., Jacobsen, M., Fiksdahl-King, I. and Angel, S. (1977), A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press.

Beyer, Hugh and Holtzblatt, Karen (1998): Contextual design: defining customer-centered systems. San Francisco: Elsevier.

Carroll, J., Thomas, J.C. and Malhotra, A. (1980). Presentation and representation in design problem solving. British Journal of Psychology, 71 (1), pp. 143-155.

Carroll, J., Thomas, J.C. and Malhotra, A. (1979). A clinical-experimental analysis of design problem solving. Design Studies, 1 (2), pp. 84-92.

Carroll, J. and Kellogg, W. (1989), Artifact as Theory-Nexus: Hermeneutics Meets System Design. Proceedings of the ACM Conference on Human Factors in Computing Systems. New York: ACM.

Casey, S.M. (1998), Set Phasers on Stun: And Other True Tales of Design, Technology, and Human Error. Santa Barbara, CA: Aegean Publishing.

Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating GOMS for predicting and explaining real-world task performance. Human-Computer Interaction, 8 (3), pp. 237-309.

Harris, J. & Henderson, A. (1999), A Better Mythology for System Design. Proceedings of ACM’s Conference on Human Factors in Computing Systems. New York: ACM.

Malhotra, A., Thomas, J.C. and Miller, L. (1980). Cognitive processes in design. International Journal of Man-Machine Studies, 12, pp. 119-140.

Thomas, J. (2016). Turing’s Nightmares: Scenarios and Speculations about “The Singularity.” CreateSpace/Amazon.

Thomas, J.C. (1978). A design-interpretation analysis of natural English. International Journal of Man-Machine Studies, 10, pp. 651-668.

Thomas, J.C. and Carroll, J. (1978). The psychological study of design. Design Studies, 1 (1), pp. 5-11.

Thomas, J.C. and Kellogg, W.A. (1989). Minimizing ecological gaps in interface design, IEEE Software, January 1989.

Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.


Author Page on Amazon

Find and Cultivate Allies

14 Monday May 2018

Posted by petersironwood in America, management, psychology, Uncategorized

≈ 4 Comments

Tags

allies, Business, collaboration, cooperation, HCI, IBM, organizational change, pattern language, politics, teamwork, usability

Find and Cultivate Allies


Prolog/Acknowledgement: 

The idea for this Pattern comes from personal experience although I am sure there must be many other writers who make a similar point.

Author, reviewer and revision dates: 

Created by John C. Thomas in May, 2018.


Abstract: 

Human beings are highly social by nature. We work more effectively in groups (for many tasks), and working in groups is also more pleasurable. In a group of any size and complexity, people will have a large variety of goals and values. To achieve a goal, including but not limited to change within the group itself, it is useful to make common cause with others within the larger group. Whenever it becomes useful to promote social change of any kind, it is important to seek out and then cultivate allies. You will achieve greater success, enjoy the process more, and learn much.

Context: 

Complex problems and large problems can often only be solved by groups. Within a large group, there will be many sub-groups and individuals whose motivations, expertise, and values are partially different from those in other sub-groups or from those of other individuals. In order to achieve any kind of goal including but not limited to changes within the group itself, a great deal of knowledge must be brought to bear and a large number of actions will be required. Generally, an individual or a small group will not have the knowledge, power, or resources to take all of these actions.

The variety of goals, values, experiences, and scope of power of various individuals and subgroups within a larger group can be viewed as a resource. The interactions among such individuals can be a source of creativity. In addition, in order to accomplish some goal, you may seek and find among these individuals and groups those whose goals are compatible with yours and whose power and resources allow them to do things you cannot do yourself.

Individuals are subject to a variety of perceptual and cognitive illusions, and these may be exaggerated in a large group. Changing a group, team, organization, corporation, or NGO may be even more difficult than changing an individual, even if the change would benefit the group, team, organization, corporation, or NGO. Within any organization, there come to be entrenched interests that are orthogonal to, or even antithetical to, the espoused purposes of the group.


Problem:

Over time, organizations begin to behave in ways that are ineffective, inefficient, or even antithetical to their purpose. Whatever the cause, an individual who recognizes these infelicities in the organization will typically not, by acting alone, have the power to change them. Force of habit, custom, culture, and the entrenched power of others will tend to make change by an individual extremely difficult or impossible, despite their pointing out that the current way of doing things is counter-productive.

Forces:

  • People who wield local power in an organization are often afraid that any change will weaken their power.
  • Changing one part of the organization generally means that other parts must also change, at least slightly.
  • What works best for an organization must necessarily change over time because of changes in personnel, society, technology, competition, the environment, and so on.
  • Organizations typically codify the way they currently work by documenting procedures, providing training, incorporating current processes into software systems, floor layouts, and so on.
  • Each person in an organization is typically rewarded according to the performance of a small area of the organization that centers on or near them.

  • People within an organization of any size will exhibit large variations in knowledge, skill, values, goals, and the resources available to them.
  • In many organizations, a valid reason for continuing to do X is simply to say, "That's the way we've always done it."
  • It is not considered a valid reason for changing from doing X to doing Y simply to say, "We've never done it this way before."
  • Organizations are therefore prone to continuing along a path long after it has ceased to be a fruitful, ethical, or lawful one.

Solution:

If a person wishes to change how a large organization does things, they need to find and cultivate potential allies within the organization. Allies may be people who can be convinced that the change is good for them individually, for their department, for the organization as a whole, for society, or for life on earth. These allies will have crucial information, power, friends, or resources to help make the change possible.


Example: 

For two years, in the early 1980's, I worked in the IBM Office of the Chief Scientist. My main mission was to get IBM as a whole to pay more attention to the usability of its products. No one worked for me. I had no budget. I did, however, have the backing of the Chief Scientist, Lewis Branscomb. Among his powers at the time was the ability to "Non-Concur" with the proposed plans of other parts of IBM. This meant that if other IBM divisions did not have usability labs or adequate staff, the Chief Scientist could block the approval of their plans. Lewis himself was a great ally because he had a lot of personal credibility due to his brilliance. Having the power to block the plans of other divisions was also critical.

IBM at this time already had some Human Factors Labs that had done excellent work for years. However, there were large areas, such as software, that were mainly untested. In addition, most of IBM's users had been technical people, and many of the usability tests had been done on other technical people. This had been appropriate, but with the extension of computing into other areas of life, many of IBM's "end users" were now people with little technical computer background. This included administrative assistants and clerks; even chemists, physicists, MDs, lawyers, and other people with advanced education found IBM products hard to use. Typically, none of these fairly new groups of users had used computers much or had been taught their use in school.

I needed to find allies because the changes that were necessary to IBM were widespread. One important ally was already provided: Tom Wheeler had a similar position to mine within another corporate staff organization called “Engineering, Products, and Technology.” Tom could also get his boss to non-concur with the plans of divisions who were unwilling to “get on board” with the changes. But I needed more allies.

One obvious source of allies were the existing Human Factors Groups. Where they existed, they were typically staffed and managed by excellent people; however, they were often understaffed and often brought in near the end of the development cycle. In many cases, only their advice on “surface features” or documentation could be incorporated into the product. This was frustrating to them. They knew they could be more effective if they were brought in earlier. Often, this did happen, but typically because they had developed personal reputations and friendships (allies) within their organization. It was not mandated by the development process.

Who else would benefit from more usable IBM products? There's a long list! A lot of "power" within IBM came from Sales and Marketing. The founder, Thomas J. Watson, was himself primarily a salesman and marketer. Most of the CEOs had come from this function of the organization. Many in Sales and Marketing were beginning to see for themselves that IBM products were frustrating customers. Finding people within such organizations who were willing to stand up and "be counted" was critical. It was especially useful to find allies in Europe who were on board with the suggested changes. In many European countries, there were social and legal constraints that gave even more weight to having products that did not cause mental stress, repetitive motion injuries, eyestrain, hearing loss, and so on.

In many parts of IBM, there were also "Product Assurance" organizations that required products to be tested before final release. In this case, two simple but crucial and fundamental changes needed to be made. Again, people who worked in Product Assurance wanted these changes. First, we needed to convince development to work with Product Assurance earlier rather than later, so that any problems would not cause product announcement slippage (or be ignored). Second, we needed to convince Product Assurance to test the procedures and documentation with people outside the development teams. Current practice was often for the Product Assurance people to watch members of the development team "follow" the documented process to ensure that it actually worked. The problem with this practice is that language is ambiguous. The people on the development team already knew how to make the product work, so they would interpret every ambiguity in the instructions in the "proper" way. IBM customers and users, however, would have no way of knowing how to resolve these ambiguities. Instead of making sure that the documentation was consistent with a successful set-up, the process was changed to see whether the documentation actually resulted in a successful set-up when attempted by someone technically appropriate but outside the development team.


People within IBM product divisions did care about budgets. Adding human factors professionals to existing labs or, in some cases, actually setting up new labs, would obviously cost money. We needed to show that they would save money, net. Some of the human factors labs had collected convincing data indicating that many service calls done at IBM’s expense were not due to anything actually being wrong with the product but instead were because the usability of the product was so bad that customers assumed it must not be working correctly. In most cases, fixing the usability of products would save far more money than the additional cost of improving the products.

In some cases, developing allies was a fairly simple business. For example, IBM had a process for awarding faculty grants for academic research relevant to its technologies and products. These were awarded in various categories. Adding a category to deal with human-computer interaction required a single conversation with the person in charge of that program. Similarly, IBM awarded fellowships to promising graduate students in various categories of research. Again, adding the category of human-computer interaction resulted from a single conversation. It should be noted that the ease of doing that resulted much more from the fact that it was known throughout the company that usability was now deemed important and the fact that I worked for the well-respected Chief Scientist than from any particular cleverness on my part.

In at least one case, an ally “fell in my lap.” Part of how I operated was to visit IBM locations around the world and give a talk about the importance of usability for IBM’s success. Generally, these talks were well-received although that did not guarantee any success in getting people to change their behavior. When I gave the talk to the part of IBM that made displays, however, I got a completely hostile reaction. It was clear that the head of the division had somehow made up his mind before I started that it was complete nonsense. I had no success whatever. Only a few months later, the head of this division got an IBM display of his own. He couldn’t get it to work! He did a complete 180 and became an important supporter, through no fault of my own. (Of course, there may have been additional arm-twisting beyond my ken).


There were also two important, instructive, and interrelated failures in lining up allies. First, it was very difficult to line up development managers. An IBM developer's career depended on getting their product "out the door." Not every product development effort that began resulted in a product being shipped. Once the product was shipped, the development manager was promoted and often went to another division. So, from the development manager's perspective, the important thing was to get their product shipped. If it "bombed" after shipment, it wasn't their problem. In order for the product to be shipped, it had to be forecast to make significant net revenue for IBM. No big surprise there! However, these predictions did not take into account actual sales, the actual cost of sales, the actual service costs, or even the actual production costs. The only costs really known were the development costs. So, for every additional dollar the development manager spent during development, there was one dollar added to the development costs, plus an additional dollar added to predicted service costs, another dollar added to predicted manufacturing costs, and a further five dollars added to the predicted sales and marketing costs. If they spent an extra dollar doing usability tests, for example, it added not just one but eight dollars to estimated overall costs. Moreover, since IBM was in business to make a profit, an increase of 8 dollars in costs meant an increase of nearly 20 dollars in projected price. This meant fewer predicted products sold.

In actuality, spending an additional dollar to improve usability of products should reduce service costs and sales and marketing costs. But that is not the formula that was used. The logic of the formula, corroborated by correlational data, was that bigger, more complex products had higher development costs and also had higher service, manufacturing and sales costs. When one compared a mainframe and a PC, this formula made sense. But when used as a decision tool by the development manager, it did not make sense. (By analogy, there is a strong correlation between the size of various species of mammals and their longevity. This, however, does not mean that you will live twice as long if you double your own body weight!).
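The multiplier logic described above can be sketched in a few lines of code. The round-number multipliers (1x service, 1x manufacturing, 5x sales and marketing, and a roughly 2.5x cost-to-price markup) are illustrative figures reconstructed from this narrative, not IBM's actual internal formula.

```python
# Illustrative sketch of the forecasting formula described in the text.
# All multipliers are hypothetical round numbers from the narrative,
# not IBM's real internal figures.

def projected_cost_increase(extra_dev_dollars: float) -> float:
    """Each extra development dollar was assumed to drag along matching
    service and manufacturing cost increases plus 5x sales/marketing."""
    service = extra_dev_dollars * 1
    manufacturing = extra_dev_dollars * 1
    sales_marketing = extra_dev_dollars * 5
    return extra_dev_dollars + service + manufacturing + sales_marketing

def projected_price_increase(extra_dev_dollars: float,
                             markup: float = 2.5) -> float:
    """Price was projected as a multiple of total cost, so an 8-dollar
    cost increase became roughly a 20-dollar price increase."""
    return projected_cost_increase(extra_dev_dollars) * markup

print(projected_cost_increase(1))   # 8
print(projected_price_increase(1))  # 20.0
```

Note that the formula punishes any discretionary development spending, including usability testing, even when that spending would in reality lower service and sales costs.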

Recall, however, that the development manager's career did not much depend on how successful the product was after release; it mainly depended on showing that they could get their product shipped. Development managers proved to be difficult to get "on board." In some cases, despite the organizational pressures, some development managers did care about how the product fared, were interested in making their products usable, and did spend additional money to improve them. Making such allies, however, relied on appealing to their personal pride of ownership or convincing them it was best for the company.

Some development managers suggested that perhaps I could get the Forecasters to change their formula so that they would be given credit for higher sales to balance the projected increase in price (and attendant reduction in sales volume forecasts). It would have been an excellent leverage point to have gotten the Forecasting function as an ally. I was not, however, sufficiently wise to accomplish this.

The organizational payoff matrix for the forecaster was quite skewed toward being conservative. If they used the existing formula and thereby "killed" a product by reducing the sales forecast because of the money spent improving usability, no one would ever find out that the forecaster might have erred. On the other hand, if I had convinced them, with necessarily quite indirect evidence, that the product, by virtue of being more usable, would sell many more units, there were at least two logical possibilities. First, I might be right and the product would be a success. The forecaster would have done the right thing and would keep their job (but would be unlikely to receive any special recognition, promotion, or raise). Second, I might be wrong (for a variety of reasons having nothing to do with usability, such as unexpected competition or unexpected costs) and the product might tank. In that case, the company would lose a lot of money and the forecaster might well lose their job. While I occasionally found development managers I could convince to be allies because I could get them to value making the most excellent product over their own career, I never gained any allies in the Forecasting function. In retrospect, I think I didn't take sufficient time to discover the common ground it would have taken to get them on board.
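The asymmetry in that payoff matrix can be made concrete with a toy example. The numbers below are invented purely to illustrate why, from the forecaster's personal standpoint, conservatism dominated.

```python
# Toy payoff matrix for the forecaster described above. The values are
# invented personal consequences, not real data; they simply encode the
# asymmetry the narrative describes.

payoffs = {
    # (forecaster's action, outcome): personal consequence to forecaster
    ("keep_formula", "product_killed"): 0,      # no one learns the formula erred
    ("raise_forecast", "product_succeeds"): 0,  # right call, but no special reward
    ("raise_forecast", "product_tanks"): -10,   # may well lose their job
}

# The best possible personal outcome from sticking their neck out is no
# better than the worst outcome from conservatism, so a self-interested
# forecaster never budges.
best_if_bold = max(v for (action, _), v in payoffs.items()
                   if action == "raise_forecast")
worst_if_safe = min(v for (action, _), v in payoffs.items()
                    if action == "keep_formula")
print(best_if_bold <= worst_if_safe)  # True
```

In game-theoretic terms, the conservative action weakly dominates: it is never worse for the forecaster personally, whatever happens to the product.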


Resulting Context:

Finding allies will often enable the organization to change in ways that will benefit the organization as a whole and most of the individuals and sub-groups within it. If done with the best interests of the organization in mind, it should also increase internal mutual trust.

There is a related Anti-Pattern, which is finding allies not to change the organization in a positive way but to subvert it. If, instead of trying to make IBM more effective by making its products more usable, I had tried to ruin it by finding allies who, in the process of ruining IBM, would also profit personally, that would have been highly unethical. Such a process, even if it ultimately failed, would decrease internal mutual trust and decrease the effectiveness of the organization. Of course, one could imagine that some competitor of IBM (or of a government or team) might try to destroy it from the inside out by favoring the promotion of those who would put their own interests ahead of the company or its customers. Finding allies is likely to be ethical when it is in the best interests of the overall organization and all its stakeholders and when it is a known initiative (as was the case for improving the usability of IBM products).

References: 

https://gps.ucsd.edu/faculty-directory/lewis-branscomb.html

Branscomb, L. and Thomas, J. (1984). Ease of use: A system design challenge. IBM Systems Journal, 23 (3), pp. 224-235.

Thomas, J. (1984). Organizing for human factors. In Y. Vassiliou (Ed.), Human factors and interactive computer systems. Norwood, NJ: Ablex.

Thomas, J.C. (1985). Human factors in IBM. Proceedings of the Human Factors Society 29th Annual Meeting.  Santa Monica, CA: Human Factors Society. 611-615. 

——————————————————

Author Page on Amazon: https://www.amazon.com/author/truthtable

The Fault is in Defaults

25 Friday Jul 2014

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

Customer experience, defaults, google maps, HCI, printer, scanner, user experience, UX

“The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.”

So Cassius says to Brutus in Shakespeare's play, Julius Caesar. Cassius was trying to convince Brutus to join the plot to assassinate Caesar. As I recall, things did not turn out well for Julius Caesar. Or for Brutus. Or for Cassius. Or, ultimately, for Mark Antony either, but that's another story. The point is that there is always an interesting tension between two views. On the one hand, we imagine that we ourselves are the masters of our fate: it is our ability, or attitude, or grit, or whatever, that determines how much money or happiness or health we have. On the other hand, there is the view that things are pretty much beyond our conscious control and due to our heredity, our environment, our upbringing, etc. Both views are partly true and both have their place. If you are a user of a product and you want to get something accomplished, blaming the stupid product will not help you accomplish your goals. On the other hand, if you are a product developer, it will not help you to blame your user. You need to design thoughtfully.

I was reminded of this debate today by trying to scan a document. In general, I am amazed how excellent scanners and printers are today, not to mention CHEAP! I was born in an era of expensive, heavy, noisy, dot matrix printers or teletypes. You've come a long way, baby! But the software that actually lets us use these marvelous machines? Hmmm. Here there is a lot of room for improvement. Today, I repeatedly tried to scan a one-page document to no avail. I think I finally diagnosed the problem. The scan screen came up with a default that said "custom size," and the defaulted "custom dimensions" were 0 by 0. Because, obviously, the development team had done a thorough study of users and found, I suppose somewhat surprisingly, that the most common size of image people wanted to scan was 0 by 0. I suppose such images have the advantage that you can store many more of them on your hard drive than images that are 8.5 by 11 inches or 3 inches by 5 inches, say.

But this is not an isolated example. Often there are "defaults" which seem to me to be rather odd, to say the least. Right now, my Google Maps application, for no discernible reason, has decided that a good default location for me is the geographical center of the continental United States. It was not "born" with this default but somewhere along the line "developed" it. Why? I have never travelled (knowingly) to the geographical center of the United States. I have never wanted to "find" the geographical center of the United States. Yet, for some mysterious reason, whenever I do try to find a route to, say, the dentist who is ten miles away, the map app tries to send me from Southern California to the geographic center of the US and then back again. I can eventually get around this, but the next time I open the app, there we are again. Of course, I am tempted every time to go, just to see the place (near the corners of Oklahoma, Kansas, Arkansas and Missouri). And, "with no traffic," it only takes a little over 22 hours to get there. The phrase "with no traffic" in Southern California is equivalent to "when pigs fly." So, tempting as it is to drive 22 hours to the geographical center of the US and then 22 hours back (provided the sky is filled with flying pigs) in order to go to the dentist who is a few minutes away, I haven't yet actually taken the trip.

I am tempted to rant about the absolute ludicrosity of "sponsored links" (which cheeringly inform me that I could take a side trip to a gynecologist on the way to the dentist) but I'll try to stay on topic. Where do these defaults come from? Is this just a nerd's free choice as a perk of the job? Do developers seriously conceptualize size in terms of a two-dimensional grid with an origin at zero-zero, and is that therefore a "logical" default for paper size? Are they trying to do the user a favor by saving space?
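Whatever the answer, the scanner story suggests a simple defensive practice: sanity-check stored defaults against plausibility rules before ever presenting them to the user. Here is a minimal, hypothetical sketch; none of these names correspond to any real scanner driver's API.

```python
# Hypothetical sketch: validate a scan dialog's defaults before showing
# them, falling back to a common real-world paper size when the stored
# default would produce an unusable (zero-area) scan. All names and
# values here are invented for illustration.

COMMON_SIZES = {"letter": (8.5, 11.0), "a4": (8.27, 11.69)}

def validated_defaults(settings: dict) -> dict:
    """Return usable scan settings, replacing impossible dimensions
    with the most common real-world page size."""
    if settings.get("width_in", 0) <= 0 or settings.get("height_in", 0) <= 0:
        width, height = COMMON_SIZES["letter"]
        return {"mode": "letter", "width_in": width, "height_in": height}
    return settings

# A stored 0-by-0 "custom size" default, as in the anecdote above:
stored = {"mode": "custom", "width_in": 0.0, "height_in": 0.0}
print(validated_defaults(stored))
```

The design point is not the particular fallback chosen but that a default should always describe something a user could plausibly want to do.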

I am hoping there is a product manager out there who can answer these questions. I am hoping things will turn out better than they did for Caesar and Brutus and Cassius.

Newsflash: MUSAK does not compensate for bad customer experience

09 Thursday Jan 2014

Posted by petersironwood in Uncategorized

≈ 1 Comment

Tags

bad music, customer service, HCI, IVR, Musak, UI, UX

Newsflash: Playing really low quality musak while the customer is on hold for 40 minutes DOES NOT improve the customer experience. Nor does ALWAYS playing the message that you are experiencing "unusually heavy volumes" right now improve your credibility. Now, I admit that someone in marketing who thought about it for about 15 seconds *might* think that playing really bad music would be a good thing. After all, people do pay money to listen to music. Not everyone is a pirate. And people spend a lot of time listening to music. Here's the thing that will come to you if you think about it for 20 or 30 seconds, though. People pay to listen to the music they choose. They do not pay to hear the music you choose. Furthermore, people pay to listen to music that is high quality. Granted, sometimes, when nothing else is available, some of the people some of the time would prefer low quality music to no music at all. But NO ONE chooses absurdly bad quality music over silence. One more thing: unless you are a love-struck pre-teen, you do not listen to the same short sequence of music over and over and over and over for an hour at a time. No. You listen to a piece of music. Then, you listen to a DIFFERENT piece of music. Then, you listen to yet another DIFFERENT piece of music.

Now, I do grant that if you are going to put your customers on hold for 40 minutes, it is somewhat useful to give some sort of signal other than complete silence to show that you are still there and that the system hasn't "hung up" on them (which happens all too often, but that is another topic). But playing loud, obnoxious, very low fidelity music is not the answer.

Back to credibility. If you are really monitoring the call volume and the customer calls at a time of genuinely unusually high call volume, you may want to tell them that they would have better luck at another time. But if you *always* play this message, what do you think it does to your credibility? I am amazed to find that my credit union, an otherwise fine institution, *always* plays this message. And every single time, it makes me think twice about whether I can really trust my funds to an organization that clearly lies every single day.
