
In the Brain of the Beholder

17 Tuesday Jul 2018

Posted by petersironwood in America, management, psychology, Uncategorized


Tags

Design, experiment, HCI, human factors, politics, psychology, science, UX



Most people in the related fields of “Human Factors,” “User Experience,” and “Human-Computer Interaction” learn how to run experiments. Formal study largely focuses on experimental design and statistics. Indeed, these are important subjects. In today’s post, though, I want to relate three experiences with actually running experiments. Just for fun, let’s go in reverse chronological order. 

In graduate school at the University of Michigan Experimental Psychology department, one of my classmates told us about an experiment he had just conducted. Often, we designed experiments in which a strictly timed sequence of stimuli (e.g., printed words, spoken words, visual symbols) was presented, and then we measured how long it took the “subject” to respond (e.g., press a lever, say a word). Typically, these stimuli were presented fairly quickly, perhaps one every second or at most one every 4-5 seconds. This classmate, however, had felt this was too stressful and wanted to make the situation less so for the subjects. So, instead of having the stimuli presented, say, every 4 seconds, my classmate decided to be more humane and make the experiment “self-paced.” In other words, no matter how long the subject took to make a response, the next stimulus would be presented 1 second later. So, how did this “kindness” work out in practice? 


A few days later, I heard a scream in the lab down the hall and ran in to see whether everyone was okay. One of my classmate’s first subjects had just literally run out of the experimental room screaming “I can’t take it any more! I quit!” My classmate was flabbergasted. But eventually, he got the subject to calm down and explain why they had been so upset. The subject had begun by responding carefully to the stimuli. So, perhaps they took ten seconds for the first item, and the new stimulus came up one second later. On the second go, they took perhaps 9.5 seconds, and then the next stimulus came up one second later. As time went on, the subject responded more and more quickly, so the next stimulus also came up more and more quickly. In the subject’s mind, the experiment was becoming more and more difficult, as determined by the experimenter. They had no idea that had they slowed back down to responding once every ten seconds, they’d only have been presented with stimuli at that much slower pace. 
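To make the dynamic concrete, here is a small sketch (the function name and the response times are illustrative, not from the original study) of how a “self-paced” design couples the pace of the stimuli to the subject’s own speed:

```python
def onset_intervals(response_times, delay=1.0):
    """Time between successive stimulus onsets when each new stimulus
    appears `delay` seconds after the previous response. The faster
    the subject responds, the faster the stimuli arrive."""
    return [rt + delay for rt in response_times]

# A subject who starts deliberately and then speeds up trial by trial...
print(onset_intervals([10.0, 9.5, 7.0, 4.0, 2.0]))
# ...sees the stimuli close in from 11.0 s apart to 3.0 s apart.
```

The subject controls the pace entirely, but unless that causal link is explained, the accelerating stimuli feel like escalating demands imposed by the experimenter.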

So, here we have one way that these so-called subjects differ from each other. They may not interpret the experiment in the framework in which it is thought of by the experimenter. In this particular case, there was a difference in the attribution of causality, but there are many other possibilities. This is one of many reasons for doing a pilot experiment and talking with the subjects. 

The next earlier example took place at Case Western Reserve. In my senior year, I was married and had a kid, so I worked three part-time jobs while going to school full-time. One of the jobs was teaching “Space Science” and “Aeronautics” to some sixth graders at the Cleveland Supplementary Educational Center. Another was as a Research Assistant to a professor in the Psychology Department. We were doing an experiment with kids in an honest-to-God “Skinner Box.” The kids pulled a lever and won nickels. Meanwhile, on a screen in front of them, there appeared a large red circle, and then we looked at how much each kid continued to press the lever (without winning any more nickels) when confronted with the same red circle, a smaller red circle, a red ellipse, etc. 


There was a small waiting room next to the Skinner Box, and it had a greenboard in it. So, since there was another kid waiting there just twiddling his thumbs, I decided to give him a little mini-lecture on the solar system: sun at the center, planets in order, some of the major moons, etc. 

After each kid had finished the experiment, I always asked them what they thought was going on during the experiment. (This was despite the fact that the Professor I was working for was a “strict behaviorist”). When I asked this kid what he thought was going on, he referred back to my lecture about the solar system! 

Oops! Just because the lecture and the experiment were two completely unrelated things in my mind didn’t mean they were for the kid! Of course, they seemed related to him! Both involved circles and they both took place at the same rather unique and unusual place: a psychology laboratory. 

And this too is worth thinking about. We psychologists and Human Factors people typically report on the design of the experiment and hopefully relate the instructions. We, however, do not typically report on a host of other things that we think of as irrelevant but may impact the subject and influence their behavior. Was the receptionist nice to them or rude? What did their friends say about going to do a psychology experiment or a UX study? When the experimenter explained the experiment and asked whether there were any questions, was that a sincere question? Or, was it just a line delivered in a rather mechanical monotone that encouraged the subject not to say a word? 

Of course, the very fact that humans differ so much is why some psychologists prefer to use rats. And, the psychologists (as well as a variety of biologists and medical doctors) don’t just use any old rats. They use rats that are carefully bred to be “lab rats.” They are expected to act in a fairly uniform fashion. And, for the most part, they do.


I was helping my girlfriend with her intro psych project. We were replicating the Yerkes-Dodson Law. This states that as you increase stress, performance improves, but only to a point. After that, additional stress causes performance to deteriorate (something that software development managers would do well to note). One of the ways I helped was to get some of the rats out of their cages. I would open up the top of the cage, reach around the rat behind its neck, and pull it out. Not a big deal. All the rats were quite placid and easy to handle. They all acted the same. Then, it was time to get the day’s last rat, who was to be placed in the “high stress” condition. I went to the cage and opened it just as I had done for the last dozen rats. But instead of sitting there placidly and twitching its nose, this rat raced to the bars of his cage and hung on with both of his little legs and both of his little arms with all his might! Which might was not equal to mine but was rather incredible for such a tiny fellow. Rats sometimes squeak rather like a mouse does. But not this one! This carefully bred clone barked! Loudly! Like a dog. Whether this rat had suffered some previous trauma or was subject to some kind of odd mutation, I cannot say. 
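The Yerkes-Dodson relation mentioned above is an inverted U. This toy Gaussian form is my own illustrative choice, not the law’s original formulation, but it captures the shape:

```python
import math

def performance(arousal, optimum=5.0, width=2.0):
    """Toy inverted-U model of the Yerkes-Dodson law: performance
    peaks at an optimal arousal level and falls off on either side."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# More stress helps up to the optimum, then hurts.
assert performance(3.0) < performance(5.0) > performance(8.0)
```

The managerial moral in the parenthetical above is just the right-hand side of the curve: past the optimum, piling on pressure reduces output.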

But this I can say. Your “users” or “subjects” are not identical to each other. And, while modeling is a very useful exercise, they will never “be” identical to your model. They are always acting and reacting to a reality as beheld by them. And their reality will always be somewhat different from yours. That does not mean, however, that generalizations about people — or rats — are always wrong or that they are never useful. 

It does not mean that gravity will not affect people just because they refuse to believe in it. There really is a reality out there. And, that reality can kill rats or people in an eye blink; especially those who actively refuse to see what is happening before their very eyes. 


Who knows? You might be about to be placed in the “High Stress” condition no matter how tightly you hang on to the bars of your cage – or, to your illusions.  

————————————-

Author Page on Amazon

Madison Keys, Francis Scott Key, the “Prevent Defense” and giving away the Keys to the Kingdom. 

07 Saturday Jul 2018

Posted by petersironwood in America, family, management, psychology, sports, Uncategorized


Tags

Business, career, HCI, human factors, IBM, life, school, sports, UX


Madison Keys, for those who don’t know, is an up-and-coming American tennis player. In this Friday’s Wimbledon match, Madison sprinted to an early 4-1 lead. She accomplished this through a combination of ace serves and torrid ground strokes. Then, in an attempt to consolidate or protect her lead, playing the (in)famous “prevent defense” imported from losing football coaches, she stopped hitting through the ball and began carefully guiding it instead: into the net, or well long, or just inches wide. 


Please understand that Madison Keys is a wonderful tennis player. And, her “retreat” to being “careful” and playing the “prevent defense” is a common error that many professional and amateur players fall prey to. It should also be pointed out that what appears to be overly conservative play to me, as an outside observer, could easily be due to some other cause, such as a slight injury or, even more likely, her opponent adjusting to Madison’s game. Whether or not she lost because of using the “prevent defense,” no one can say for sure. But I can say with certainty that many people in many sports have lost precisely because they stopped trying to “win” and instead tried to protect their lead by being overly conservative, changing the approach that got them ahead. 

Francis Scott Key, of course, wrote the words to the American National Anthem which ends on the phrase, “…the home of the brave.” Of course, every nation has stories of people behaving bravely and the United States of America is no exception. For the American colonies to rebel against the far superior naval and land forces (to say nothing of sheer wealth) of the British Empire certainly qualifies as “brave.” 


In my reading of American history, one of our strengths has always been taking risks in doing things in new and different ways. In other words, one of our strengths has been being brave. Until now. Now, we seem in full retreat. We are plunging headlong into the losing “prevent defense” borrowed from American football. 

American football can hardly be called a “gentle sport” – the risk of injury is ever present, and now we know that even those who manage to escape broken legs and torn ligaments may suffer internal brain damage. But there is still the tendency of many coaches to play the “prevent defense.” In case you’re unfamiliar with American football, here is an illustration of its effect on the score. A team plays a particular way for three quarters of the game and is ahead 42-21. If you’re a fan of linear extrapolation, you might expect the final score to be something like 56-28. But coaches sometimes want to “make sure” they win, so they play the “prevent defense,” which basically means letting the other team make first down after first down, keeping possession of the ball and scoring, though somewhat slowly. The coach suddenly loses confidence in the method that worked for three quarters of the game. It is not at all unusual for the team that employs this “prevent defense” to lose; in this example, perhaps, 42-48. 
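The “fan of linear extrapolation” arithmetic is simply each team’s per-quarter scoring rate scaled to a full game. A quick sketch (the function name is mine, chosen for illustration):

```python
def extrapolate_score(score, quarters_played=3, total_quarters=4):
    """Project a final score by assuming each team keeps scoring
    at its per-quarter rate so far."""
    return tuple(round(points * total_quarters / quarters_played)
                 for points in score)

print(extrapolate_score((42, 21)))  # (56, 28)
```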


America has apparently decided, now, to play a “prevent defense.” Rather than being innovative and bold and embracing the challenges of new inventions and international competition, we instead want to “hold on to our lead” and introduce protective tariffs just as we did right before the Great Depression. Rather than accepting immigrants with different foods, customs, dress, languages, and religions, we are now going to “hold on to what we have” and try to prevent any further evolution. In the case of American football, the prevent defense sometimes works. In the case of past civilizations that tried to isolate themselves, it hasn’t and it won’t. 


This is not to say that America (or any other country) should right now have “open borders” and let everyone in for every purpose. Nor should a tennis player hit every shot with all their might. Nor should a football team try the riskiest possible plays at every turn. All systems need to strike a balance among replicating what works, defending what one has, and exploring what is new and different. That is what nature does. Every generation “replicates” aspects of the previous generation, but every generation must also explore new directions. Life does this through sexual reproduction, mutation, and crossover. 

This balance plays out in careers as well. You need to decide for yourself how much and what kinds of risks to take. When I obtained my doctorate in experimental psychology, for example, it would have been relatively low-risk in many ways to get a tenure-track faculty position. Instead, I chose managing a research project on the psychology of aging at Harvard Med School. To be sure, this is far less than the risk that some people take when, e.g., joining “Doctors Without Borders” or sinking all their life savings (along with all the life savings of their friends and relatives) into a start-up. 

At the time, I was married and had three small children. Under these circumstances, I would not have felt comfortable having no guaranteed income. On the other hand, I was quite confident that I could write a grant proposal to continue to be funded by “soft money.” Indeed, I did write such a proposal along with James Fozard and Nancy Waugh, who were at once my colleagues, my bosses, and my mentors. Our grant proposal was neither funded nor rejected but “deferred,” and then it was deferred again. At that point, only one month of funding remained before I would be out of a job. I began to look elsewhere. In retrospect, we all realized it would have been much wiser to have a series of overlapping grants so that all of our “funding eggs” were never in one funding agency’s basket. 


I began looking for other jobs and had a variety of offers from colleges, universities, and large companies. I chose IBM Research. As it turned out, by the way, our grant proposal was ultimately funded for three years, but we only found out after I had already committed to go to IBM. During this job search, I was struck by something else. My dissertation had been on problem solving, but my “post-doc” was in the psychology of aging. So far as I could tell, this didn’t bother any of the interviewers in industry in the slightest. But it really freaked out some people in academia. It became clear that in academia, at least according to many, one was “expected” to choose a specialty and stick with it. Perhaps you need not do that during your entire academic career, but anything less than a decade smacked of dilettantism. At least, that was how it felt to me as an interviewee. By contrast, it didn’t bother the people who interviewed me at Ford or GM that I knew nothing more than the average person about cars and had never really thought about the human factors of automobiles. 


The industrial jobs paid more than the academic jobs, and that played some part in my decision. The job at GM sounded particularly interesting. I would be “the” experimental psychologist in a small interdisciplinary group of about ten people who were essentially tasked with trying to predict the future. The “team” included an economist, a mathematician, a social psychologist, and someone who looked for trends in word frequencies in newspapers. The year was 1973, and US auto companies were shocked and surprised to learn that their customers suddenly cared about gas mileage! These companies didn’t want to be shocked and surprised like that again. The assignment reminded me of Isaac Asimov’s fictional character in the Foundation Trilogy, Hari Seldon, who founded “psychohistory.” We had the chance to do it in “real life.” It sounded pretty exciting! 


On the other hand, cars seemed to me to be fundamentally an “old” technology while computers were the wave of the future. It also occurred to me that a group of ten people from quite different disciplines trying to predict the future might sound very cool to me and apparently to the current head of research at GM, but it might seem far more dispensable to the next head of research. The IBM problem that I was to solve was much more fundamental. IBM saw that the difficulty of using computers could be a limiting factor in their future growth. I had had enough experience with people — and with computers — to see this as a genuine and enduring problem for IBM (and other computer companies); not as a problem that was temporary (such as the “oil crisis” appeared to be in the early 70’s). 


There were a number of additional reasons I chose IBM. IBM Research’s population at the time was far more diverse than that of the auto companies. None of them were very diverse when it came to male/female ratios, but at least IBM Research did have people from many different countries working there, and it probably helped their case that an IBM Researcher had just been awarded a Nobel Prize. Furthermore, the car company research buildings bored me; they were the typical rectangular prisms that characterize most of corporate America. In other words, they were nothing special. Eero Saarinen, however, had designed the IBM Watson Research Lab. It sat like an alien black spaceship ready to launch humanity into a conceptual future. It was set like an onyx jewel atop the jade hills of Westchester. 

I had mistakenly thought that because New York City was such a giant metropolis, everything north of “The City” (as locals call it) would be concrete and steel for a hundred miles. But no! Westchester was full of cut granite, rolling hills, public parks of forests marbled with stone walls and cooled by clear blue lakes. My commute turned out to be a twenty-minute, trafficless drive through a magical countryside. By contrast, since Detroit car companies at that time held a lot of political power, there was no public transportation to speak of in the area. Everyone who worked at the car company headquarters spent at least an hour in bumper-to-bumper traffic going to work and another hour in bumper-to-bumper traffic heading back home. In terms of natural beauty, Warren, Michigan, just doesn’t compare with Yorktown Heights, NY. Yorktown Heights even smelled better. I came for my interview just as the leaves began painting their autumn rainbow palette. Westchester roads even seemed more creative. They wandered through the land as though illustrative of Brownian motion, while Detroit area roads were as imaginative as graph paper. Northern Westchester county sports many more houses now than it did when I moved there in late 1973, but you can still see the essential difference from these aerial photos. 

Aerial map: Yorktown Heights, NY

Aerial map: Warren, MI

The IBM company itself struck me as classy. It wasn’t only the Research Center. Everything about the company stated “first class.” Don’t get me wrong. It wasn’t a trivial decision. After grad school in Ann Arbor, a job in Warren kept me in the neighborhood I was familiar with. A job at Ford or GM meant I could visit my family and friends in northern Ohio much more easily as well as my colleagues, friends and professors at the U of M. The offer from IBM felt to me like an offer from the New York Yankees. Of course, going to a top-notch team also meant more difficult competition from my peers. I was, in effect, setting myself up to go head to head with extremely well-educated and smart people from around the world. 

You also need to understand that in 1973, I would be only the fourth Ph.D. psychologist in a building filled with physicists, mathematicians, computer scientists, engineers, and materials scientists. In other words, nearly all the researchers considered themselves to be “hard scientists” who delved in quantitative realms. This did not particularly bother me. At the time, I wanted very much to help evolve psychology to be more quantitative in its approach. And yet, there were some nagging doubts that perhaps I should have picked a less risky job in a psychology department. 

The first week at IBM, my manager, John Gould, introduced me to yet another guy named “John,” a physicist whose office was near mine on aisle 19. This guy had something like 100 patents. A few days later, I overheard one of John’s younger colleagues in the hallway excitedly describing some new findings. Something like the following transpired: 

“John! John! You can’t believe it! I just got these results! We’re at 6.2 x 10 ** 15th!” 

His older colleague replied, “Really? Are you sure? 6.2 x 10 ** 15th?” 

John’s younger colleague, still bubbling with enthusiasm: “Yes! Yes! That’s right. You know. Within three orders of magnitude one way or the other!” 

I thought to myself, “three orders of magnitude one way or the other? I can manage that! Even in psychology!” I no longer suffered from “physics envy.” I felt a bit more confident in the correctness of my decision to jump into these waters which were awash with sharp-witted experts in the ‘hard’ sciences. It might be risky, but not absurdly risky.


Of course, your mileage may differ. You might be quite willing to take a much riskier path or a less risky one. Or, maybe the physical location or how much of a commute is of less interest to you than picking the job that most advances your career or pays the most salary. There’s nothing wrong with those choices. But note what you actually feel. Don’t optimize in a sequence of boxes. That is, you might decide that your career is more important than how long your commute is. Fair enough. But there are limits. Imagine two jobs that are extremely similar and one is most likely a little better for your career but you have to commute two hours each way versus 5 minutes for the one that’s not quite so good for your career. Which one would you pick? 

In life beyond tennis and beyond football, one also has to realize that your assessment of risk is not necessarily your actual risk. Many people have chosen “sure” careers or “sure” work at an “old, reliable” company only to discover that the “sure thing” actually turned out to be a big risk. I recall, for example, reading an article in INC. magazine claiming that two “sure-fire” small businesses were videotape rental stores and video game arcades. Within a few years of that article, they were almost sure-fire losers. Remember Woolworths? Montgomery Ward?

At the time I joined IBM, it was a dominant force in the computer industry. But there are no guarantees: not in career choices, not in tennis strategy, not in football strategy, not in playing the “prevent defense” when it comes to America. The irony of trying too hard to “play it safe” is illustrated by this short story about my neighbor from Akron: 


Wilbur’s Story

Wilbur’s dead. Died in Nam. And, the question I keep wanting to ask him is: “Did it help you face the real dangers? All those hours together we played soldier?”

Wilbur’s family moved next door from West Virginia when I was eleven. They were stupendously uneducated. Wilbur was my buddy though. We were rock-fighting the oaks of the forest when he tried to heave a huge toaster-oven-sized rock over my head. Endless waiting in the Emergency Room. Stitches. My hair still doesn’t grow straight there. “Friendly fire.”

More often, we used wooden swords to slash our way through the blackberry and wild rose jungle of The Enemy; parry the blows of the wildly swinging grapevines; hide out in the hollow tree; launch the sudden ambush.

We matched strategy wits on the RISK board, on the chess board, plastic soldier set-ups. I always won. Still, Wilbur made me think — more than school ever did.

One day, for some stupid reason, he insisted on fighting me. I punched him once (truly lightly) on the nose. He bled. He fled crying home to mama. Wilbur couldn’t stand the sight of blood.

I guess you got your fill of that in Nam, Wilbur.

After two tours of dangerous jungle combat, he was finally to ship home, safe and sound, tour over — thank God!

He slipped on a bar of soap in the shower and smashed the back of his head on the cement floor.

Wilbur finally answers me across the years and miles: “So much for Danger, buddy,” he laughs, “Go for it!”

Thanks, Wilbur.

Thanks.

—————————————-

And, no, I will not be giving away the keys to the kingdom. Your days of fighting for freedom may be over. Mine have barely begun.


Author Page on Amazon

Support Both Flow & Breakdown

21 Monday May 2018

Posted by petersironwood in America, management, psychology, Uncategorized


Tags

collaboration, contextual design, Design, environment, error messages, HCI, human factors, learning, pattern language, pliant systems, politics, usability

Support Both Flow & Breakdown


Prologue/Acknowledgement/History: 

Only a few days after moving into our San Diego home (with a beautiful drip-irrigated garden), I glanced outside to see a geyser spouting about ten feet into the air. San Diego can only survive long term if people conserve water! Yet here we were, wasting water. I rushed outside to turn off the sprinkler system. As I ran to the controller, I noted in passing that the nearby yard lay soaked with pools of water. I turned off the sprinklers, except for the geyser, which continued its impersonation of “Old Faithful.” I tried turning the valve on that particular sprinkler and did manage in that way to completely soak myself, but the water waste continued unabated. We called the gardener, who knew the location of the shutoff valve for the entire house and garden and explained it to us. Later, he came and replaced the valve with a newer type. The old type had failed by being stuck in the fully ON position!

Often in the course of my life, I have been frustrated by interacting with systems — whether human or computer — that were clearly designed for a different set of circumstances than the one I found myself in at the time. In a sense, the Pattern here is a specific instance of a broader design Pattern: Design for a Broad Range of Contexts. The specific point I want to focus on in this Pattern is that a design should support the “normal” flow of things when they are working well, but should also support likely modes of breakdown.

During the late 1970’s, I worked with Ashok Malhotra and John Carroll at IBM Research on a project we called “The Psychology of Design.” We used a variety of methods, but one was observing and talking with a variety of designers in various domains. One of the things we discovered about good designers was a common process that at first seemed puzzling. Roughly speaking, designers would abstract a set of requirements from a concrete situation. They would then create a design that logically met all the requirements. Since we were only studying design and not the entire development process (which might include design, implementation, debugging, etc.), it might seem that the design process would end at that point. After all, the designer had just come up with a design that fulfilled the requirements.

What good designers actually did, however, at least on many occasions, was to take their abstract design and imagine it operating back in the original concrete situation. When they imagined their design working in this concrete reality, they often “discovered” additional requirements or interactions among design elements or requirements that were overlooked in the initial design. While unanticipated effects can occur in purely physical systems (e.g., bridges flying apart from the bridge surface acting like a wing; O-rings cracking at sufficiently cold temperatures), it seems that human social systems are particularly prone to disastrous designs that “fulfill” the requirements as given.


The Pattern here specifically focuses on one very common oversight. Systems are often designed under the assumption that everything in the environment of the system is working as it “should” or as intended. This particular type of breakdown was featured in an important theoretical paper authored by Harris and Henderson and presented at CHI 99. That paper claimed systems should be “pliant” rather than rigid. A common experience most readers have had with a non-pliant system is to call an organization and be put into an automated call-answering system that has no appropriate category anywhere for the current situation and yet provides no way to get through to a human operator.

A telling example from their CHI Proceedings article is that of a paper-based form that was replaced with a computerized system with fixed fields. So, for example, there were only so many characters for various address fields. When someone needed to make an exception to the address syntax with a paper form, it was easy. They could write: “When it’s time to ship the package, please call this number to find out which port the Captain will be in next and ship it there: 606-555-1212.” In the computerized form, this was impossible. In fact, there were so many such glitches that the workers who actually needed to get their work done used the “required” “productivity-enhancing” computer system and also duplicated everything in the old paper system so that they could actually accomplish their tasks.
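A “pliant” version of such a form simply leaves an escape hatch for the cases the fixed fields cannot anticipate. A minimal sketch, with field names that are hypothetical (not taken from the Harris and Henderson paper):

```python
from dataclasses import dataclass

@dataclass
class ShippingRecord:
    """Structured fields cover the common case; a free-text field
    preserves the exceptions a paper form handled effortlessly."""
    name: str
    street: str = ""
    city: str = ""
    postal_code: str = ""
    special_instructions: str = ""  # the escape hatch

record = ShippingRecord(
    name="Capt. R. Ames",
    special_instructions=("When it's time to ship the package, call "
                          "606-555-1212 to find out which port the "
                          "Captain will be in next and ship it there."),
)
```

The design choice is cheap: one unvalidated field. What it buys is that the exceptional cases flow through the new system instead of forcing workers back to a shadow paper process.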

As part of the effort (described in the last blog post) to get IBM to pay more attention to the usability of its products, we pushed to make sure every development lab had a usability lab that was adequately equipped and staffed. This was certainly a vital component. However, usability in the lab did not necessarily ensure usability in the field. There are many reasons for that, and I collaborated with Wendy Kellogg in the late 1980’s to catalog some of those. This effort was partly inspired by a conversation with John Whiteside, who headed the usability lab for Digital Equipment Corporation. They brought people who used a word processor into their usability lab and made numerous improvements in the interface. One day he took some of the usability group out to observe people using the text editor in situ in a manuscript center. They discovered that the typists spent 7 hours every day typing and 1 hour every day counting up, by hand, the number of lines that they had typed that day (which determined their pay). Of course, it was now immediately obvious how to improve productivity by 14%. The work of this group seems to have been inspirational for Beyer & Holtzblatt’s Contextual Design as well as the Carroll & Kellogg (1989) paper on “Artifact as Theory Nexus.”
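The 14% figure comes from treating the hour of hand-counting as reclaimable typing time:

```python
typing_hours, counting_hours = 7, 1

# If line-counting is automated, the same 8-hour day yields one
# extra hour of typing on a 7-hour base: 1/7, or about 14%.
gain = counting_hours / typing_hours
print(f"{gain:.0%}")  # 14%
```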


Author, reviewer and revision dates: 

Created by John C. Thomas in May, 2018


Related Patterns: 

Reality Check, Who Speaks for Wolf?

Abstract: 

When designing a new system, it is easy to imagine a context in which all the existing systems that might interact with the new system will operate “normally” or “properly.” In order to avoid catastrophe, it is important to understand what reasonably likely failure modes might be and to design for those as well.

Context: 

For people to design systems, it is necessary to make some assumptions that separate the context of the design from what is being designed. There is a delicate balance. If you define the problem too broadly, you run the risk of addressing a problem that is too intractable, intellectually, logistically or financially. On the other hand, if you define the problem too narrowly, you run the risk of solving a problem that is too special, temporary, or fragile to do anyone much good.

In the honest pursuit of separating the problem from its context, one particular form of simplification is especially popular. People assume that all the systems that will touch the one they are designing will not fail. That often includes the human beings who will interact with the system. Such a design process may also presume that electrical power will never be interrupted or that internet access will be continuous.

Systems so designed may have a secondary and more insidious effect. By virtue of having been designed with no consideration to breakdowns, the system will tend to subtly influence the people and organizations that it touches not to prepare for such breakdowns either.

Problem:

The systems that touch a given system can always fail, and if no consideration has been given to failure modes, the impact can be disastrous. Most typically, when the system has not been designed to deal with breakdowns, the personnel selection, training, and documentation also fail to deal with breakdowns. As a result, not only are the mechanisms of the system unsuited to breakdowns; the human organization surrounding the breakdown is also unprepared. Not only is there the possibility of immediate catastrophe; the organization is also unprepared to learn. As a result, mutual trust within and of the organizations around the system is severely damaged.


Forces:

  • Design is a difficult and complex activity and the more contingencies and factors that are taken into account, the more difficult and complex the design activity becomes.
  • Not every single possibility can be designed for.
  • People working on a design have a natural tendency to “look on the bright side” and think about the upside benefits of the system.
  • People who try to “sell” a new system stress its benefits and tend to avoid talking about its possible failures.
  • It is uncomfortable to think about possible breakdowns.
  • When anticipated breakdowns occur, the people in relevant organizations tend to think about how to fix the situation and reduce the probability or impact of breakdowns for the future.
  • When unanticipated breakdowns occur, the people in relevant organizations tend to try to find the individual or individuals responsible and blame them. This action leaves the probability and impact of future breakdowns unimproved.
  • When people within an organization are blamed for unanticipated system failure, it decreases trust of the entire organization as well as mutual trust within the organization.

  • Even when support for breakdown modes is planned for, it is often scheduled late in an ambitious project. The slightest slippage will often result in breakdowns being ignored.

Solution:

When designing a system, make sure the design process deals adequately with breakdown conditions as well as the “normal” flows of events. The organizations and systems that depend on a system also need to be designed to deal with breakdowns. For example, people should be trained to recognize and deal with breakdowns. Organizations should have a process in place (such as the After Action Review) to learn from breakdowns. Having a highly diverse design team may well improve the chances of designing for likely breakdowns. 
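A concrete (and entirely hypothetical) illustration of this solution: treat a failed dependency as an expected event, log it so the organization can learn from it later (e.g., in an After Action Review), and fall back to a degraded mode that was designed for, not improvised. The function and service names below are invented for the sketch.

```python
import logging

log = logging.getLogger("breakdown_review")

def lookup_with_fallback(key, primary, fallback):
    """Try the normal flow; on breakdown, record what happened and
    switch to a degraded mode that was designed in advance."""
    try:
        return primary(key)
    except Exception as exc:
        # This log line exists so the breakdown can be reviewed and learned
        # from later, rather than reconstructed from memory after a crisis.
        log.warning("primary lookup failed for %r: %s", key, exc)
        return fallback(key)

def flaky_primary(key):
    raise TimeoutError("upstream did not respond")  # simulated breakdown

def cached_fallback(key):
    return {"42": "last known value"}.get(key, "unknown")

print(lookup_with_fallback("42", flaky_primary, cached_fallback))
```

The important design choice is not the try/except itself but that the fallback path and its logging were specified up front, alongside the normal flow.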

Resulting Context:

Generally speaking, a system designed with attention to both the “normal” flow of events and likely breakdown modes will be more robust and resilient. Because the design takes these possibilities into account, documentation and training are also likely to help people prepare for breakdowns. Furthermore, if breakdowns are anticipated, it becomes easier for the organization to learn how to prevent them and, over time, to improve its responses to them. There is a further benefit; viz., that mutual trust and cooperation will be less damaged when a breakdown does occur. The premise that breakdowns will happen puts everyone more in the frame of mind to learn and improve rather than simply blame and point fingers.


Examples: 

1. Social networking sites were originally designed to support friends sharing news, information, pictures, and so on. “Flow” is when that is what is actually going on. Unfortunately, as we now know, social media sites can also fail to work as intended, not because there are “errors” in the code or UX of the systems themselves but because the social and political systems that form their context have broken down. The intentional misappropriation of an application or system is just one of many types of breakdowns that can occur.

2. When I ran the AI lab at NYNEX in the 1990’s, one of the manufacturers of telephone equipment developed a system for telephone operators that was based on much more modern displays and keyboards. In order to optimize performance of the system, the manufacturer brought in representative users; in this case, telephone operators. They redesigned the workflow to reduce the number of keystrokes required to perform various common tasks. At that time, operators were measured in terms of their “Average Work Time” to handle calls.

In this particular case, the manufacturer had separated the domain into what they were designing for (namely, the human-machine interface between the telephone operator and their terminal) and the context (which included what the customer did). While this seemed like a reasonable approach, it turned out, when the HCI group at NYNEX studied the problem with the help of Bonnie John, that the customer’s behavior was actually a primary determiner of the overall efficiency of the call. While it was true that the new process required fewer keystrokes on the part of the telephone operator, these “saved” keystrokes occurred when the customer, not the telephone operator, was on the critical path. In other words, the operator had to wait for the customer anyway, so one or two fewer keystrokes did not impact the overall average work time. However, the suggested workflow involved an extra keystroke that occurred when the operator’s behavior was on the critical path. As it turned out, the “system” that needed to be redesigned was not the machine-user system but the machine-user-customer system. In fact, the biggest improvement in average work time came from changing the operator’s greeting from “New York Telephone. How can I help you?” to “What City Please?” The latter greeting tended to produce much more focused conversation on the part of the customer.
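The critical-path logic can be made concrete with a toy model. The numbers below are invented, not taken from the NYNEX study; the point is that each segment of the call lasts as long as the slower party, so operator keystrokes overlapped with customer talk are free, while any extra keystroke on the operator’s own busy time lengthens the call.

```python
# Toy critical-path model of an operator-assisted call (numbers invented).
# Each segment runs customer and operator activity in parallel, so the
# segment takes as long as the slower of the two.

def call_duration(segments):
    """Total call time: sum over segments of the slower party's time."""
    return sum(max(customer, operator) for customer, operator in segments)

# (customer_seconds, operator_seconds) per segment of a hypothetical call
old_workflow = [(4.0, 2.5), (3.0, 1.0), (0.0, 1.5)]
# "Improved" design: fewer keystrokes overall, but the savings fall where
# the customer is talking, and one extra keystroke lands on the operator's
# own critical path at the end.
new_workflow = [(4.0, 2.0), (3.0, 0.8), (0.0, 1.7)]

print(call_duration(old_workflow))  # 8.5
print(call_duration(new_workflow))  # 8.7 -- the keystroke-optimized design is slower
```

This is the same moral as the study: optimizing the wrong boundary (operator keystrokes) instead of the whole machine-user-customer system can make the measured outcome worse.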

Just to be clear, this is an example of the broader point that some of the most crucial design decisions are not about your solution to the problem you are trying to solve but your decision about what the problem is versus what part of the situation you decide is off-limits; something to ignore rather than plan for. A very common oversight is to ignore breakdowns, but it’s not the only one.


3. In a retrospective analysis of the Three Mile Island nuclear meltdown, many human factors problems came to light. Many of them had to do with insufficient preparation for dealing with breakdowns. I recall three instances. First, the proper functioning of many components was shown by a red indicator light being on. When one of the components failed, this was indicated by one light in a whole bank of indicator lights not being on. That is not the most salient of signals! To me, it clearly indicates a design mentality steering away from thinking seriously about failure modes. This is not surprising, given the fear and controversy surrounding nuclear power. Those who operate and run such plants do not want the public, at least, to think about failure modes.

Second, the operators received some conceptual training about how the overall system worked, but it was not sufficient for real-time problem solving about what to do. In addition, there were manuals describing what to do, but the manuals were not sufficiently detailed either.

Third, at one critical juncture, one of the plant operators closed a valve and “knew” that he had closed it because of the indicator light next to the valve closure switch. He then based further actions on the knowledge that the valve had been closed. Guess what? The indicator light showing “valve closure” was not based on feedback from a sensor at the site of the valve. No. The indicator light next to the switch was lit by a collateral current from the switch itself. All it really showed was that the operator had changed the switch position! Under “normal” circumstances, there is a perfect correlation between the position of the switch and the position of the valve. Under failure mode, this was no longer true.
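The mismatch between commanded and sensed state is easy to model. The class below is a hypothetical sketch to illustrate the distinction, not the actual plant logic.

```python
# Hypothetical sketch of the indicator problem: the light reported the
# *commanded* state (the switch), not the *sensed* state (the valve itself).

class Valve:
    def __init__(self):
        self.switch_position = "open"   # what the operator commanded
        self.actual_position = "open"   # what a sensor at the valve would report
        self.stuck = False              # failure mode: the valve does not move

    def command_close(self):
        self.switch_position = "closed"
        if not self.stuck:
            self.actual_position = "closed"

    def indicator_as_built(self):
        # Lit by collateral current from the switch: shows only the command.
        return self.switch_position

    def indicator_as_needed(self):
        # Would require a sensor at the valve: shows the actual state.
        return self.actual_position

valve = Valve()
valve.stuck = True          # the failure mode the design never considered
valve.command_close()
print(valve.indicator_as_built())   # closed -- what the operator saw
print(valve.indicator_as_needed())  # open   -- the truth
```

Under normal operation the two indicators always agree, which is exactly why the as-built shortcut survived testing; only a breakdown exposes the difference.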


4. The US Constitution is a flexible document that takes into account a variety of failure modes. It specifies what to do, e.g., if the President dies in office, and it has been amended to specify what to do if the President is incapacitated (a contingency not really specified in the original document). The Constitution presumes a balance of power and specifies that a President may be impeached by Congress for treasonous activity. It seems the US Constitution, at least as amended, has anticipated various breakdowns and what to do about them.

There is one kind of breakdown, however, that the U.S. Constitution does not seem to have anticipated. What if society becomes so divided, and the majority of members in Congress so beholden to special interests, that they refuse to impeach a clearly treasonous President or a President clearly incapacitated or even under the obvious influence of one or more foreign powers? Unethical behavior on the part of individuals in power is a breakdown mode clearly anticipated in the Constitution. But it was not anticipated that a large number of individuals would simultaneously be unethical enough to put party over the general welfare of the nation.  Whether this is a recoverable oversight remains to be seen. If democracy survives the current crisis, the Constitution might be further amended to deal with this new breakdown mode.

5. In IT systems, the error messages shown to end users are most often messages originally designed to help developers debug the system. Despite guidelines for error messages developed over half a century ago, those guidelines are typically not followed. From the user’s perspective, it appears as though the developers know that something “nasty” has just happened and want to run away from it as quickly as possible before anyone can get blamed. They remind me of a puppy who just chewed up their master’s slippers and knows damned well they are in trouble. Instead of “owning up” to the misbehavior, they hide under the couch.

Despite many decades of pointing out how useless it is to get an error message such as “Tweet not sent,” “Invalid Syntax,” or “IOPS44,” such messages still abound in today’s applications. Fifty years ago, when most computers had extremely limited storage, there may have been an excuse for printing succinct error messages that could be looked up in a paper manual. But today? Error messages should minimally make it clear that there is an error and how to recover from it. In most cases, something should be said as well about why the error state occurred. For instance, instead of “Tweet not sent,” a message might indicate, “Tweet not sent because an included image is no longer linkable; retry with new image or link” or “Tweet not sent because it contains a potentially dangerous link; change to allow preview” or “Tweet not sent because the system timed out; try again. If the problem persists, see FAQs on tweet time-out failures.” I haven’t tested these so I am not claiming they are the “right” messages, but they have some information.
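The guideline (say that something failed, why, and what to do about it) can be captured in a trivial helper. The function name and wording below are invented, echoing the hypothetical messages above.

```python
# Sketch of the guideline: an error message should state what failed, why,
# and how to recover. Helper name and wording are invented illustrations.

def error_message(what_failed, why, how_to_recover):
    """Compose an error message with cause and recovery, per the guideline."""
    return f"{what_failed} because {why}. {how_to_recover}"

bad = "Tweet not sent"  # the status quo: an error with no cause or recovery
good = error_message(
    "Tweet not sent",
    "the system timed out",
    "Try again; if the problem persists, see the FAQ on time-out failures.",
)
print(good)
```

Even this crude template forces the developer to confront, at design time, the breakdown the message describes, which is precisely what the pattern asks for.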

Today’s approach to error messages also has an unintended side-effect. Most computer system providers now presume that most errors will be debugged and explained on the web by someone else. This saves money for the vendor, of course. It also gives a huge advantage to very large companies. You are likely to find what an error message means and how to fix the underlying issue on the web, but only if it is a system that already has a huge number of users. Leaving error message clarification to the general public advantages the very companies who have the resources to provide good error messages themselves and keeps entrenched vendors entrenched.


References: 

Alexander, C., Ishikawa, S., Silverstein, M., Jacobsen, M., Fiksdahl-King, I. and Angel, S. (1977), A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press.

Beyer, H. and Holtzblatt, K. (1998). Contextual Design: Defining Customer-Centered Systems. San Francisco: Elsevier.

Carroll, J., Thomas, J.C. and Malhotra, A. (1980). Presentation and representation in design problem solving. British Journal of Psychology, 71 (1), pp. 143-155.

Carroll, J., Thomas, J.C. and Malhotra, A. (1979). A clinical-experimental analysis of design problem solving. Design Studies, 1 (2), pp. 84-92.

Carroll, J. and Kellogg, W. (1989). Artifact as Theory-Nexus: Hermeneutics Meets System Design. Proceedings of the ACM Conference on Human Factors in Computing Systems. New York: ACM.

Casey, S.M. (1998), Set Phasers on Stun: And Other True Tales of Design, Technology, and Human Error. Santa Barbara, CA: Aegean Publishing.

Gray, W. D., John, B. E., & Atwood, M. E. (1993). Project Ernestine: Validating GOMS for predicting and explaining real-world task performance. Human Computer Interaction, 8(3), 237-309.

Harris, J. & Henderson, A. (1999), A Better Mythology for System Design. Proceedings of ACM’s Conference on Human Factors in Computing Systems. New York: ACM.

Malhotra, A., Thomas, J.C. and Miller, L. (1980). Cognitive processes in design. International Journal of Man-Machine Studies, 12, pp. 119-140.

Thomas, J. (2016). Turing’s Nightmares: Scenarios and Speculations about “The Singularity.” CreateSpace/Amazon.

Thomas, J.C. (1978). A design-interpretation analysis of natural English. International Journal of Man-Machine Studies, 10, pp. 651-668.

Thomas, J.C. and Carroll, J. (1978). The psychological study of design. Design Studies, 1 (1), pp. 5-11.

Thomas, J.C. and Kellogg, W.A. (1989). Minimizing ecological gaps in interface design, IEEE Software, January 1989.

Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.


Author Page on Amazon

Introducing Peter S Ironwood

24 Sunday Nov 2013

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

Aunt Rennie, customer service, human factors, user experience

I’m not the kind of guy who likes to talk much about me.  That’s not the point.  It doesn’t matter much that when I was five, both my parents abandoned me, or why.  Doesn’t matter that I spent most of my childhood in the San Diego area being gawked at for my tall skinny frame and unruly blond hair and steel grey eyes. What does matter is that now I am on the lookout.  For what? For things that are WRONG in this world.  We need to get another thing straight. I do this for ME, not for you, though you can certainly benefit. And, when I say “wrong” I don’t mean things that are evil, though God knows there is plenty of that too.  No, what I am talking about is plain simple stupidity.  People make a product or sell you some so-called “service” and it sucks.  And why does it suck?  Because they are not satisfied to make a billion dollars by selling a shipload of vacuum cleaners or cameras or software systems that actually work.  No.  Instead, they want to make 1,000,020,000 bucks by not spending 20K to bring me in and see whether their blasted microwave or digital watch or whatever actually works for human beings.  And, do you want to know what’s *really* frigging stupid?  They don’t make their stupid billion dollars anyway.  Why?  Because they end up spending millions of dollars on help lines and millions more on TV ads that show some sexy, tight-skirted, plump-lipped, open-mouthed girl seeming to have a big O just from using their vacuum cleaner.  At least those ads I like, even though it isn’t going to get me to buy their vacuum cleaner.  But what is with these ads showing people being completely idiotic and pointless?  If the girl air-brushed into that desperate anorexic twitch isn’t going to make me buy their machine, why is some bumble-headed fat guy walking into a wall going to do the trick?

Case in point: telephone menu systems that have no obvious option for talking with a human being.  Have you ever run into one of those?  “Please listen to the following menu items and choose the one that describes your car.  Press 1 for a black car.  Press 2 for a convertible.  Press 3 for a hummer.  Press 4 to hear these options again.” WHAT???!!  I own a white BMW sedan.  The Greeks had an interesting word for self-defeating pride: “hubris.”  First of all, nobody thinks of all the possible things you might want from this “service” number ahead of time.  Nobody.  So, there should ALWAYS be a choice for talking with an operator.  Do you really need a Ph.D. in psychology from Stanford to know this?  Wouldn’t just living on the planet for six or seven years do the trick?  More later.  I have to go try to glue the fragments of my phone back together before Aunt Rennie gets here.  She gets freaked out by my temper.
