
3. Boundaries

Monday, 02 Aug 2021


Tags

boundary, Design, ethics, limit, privacy, truth, UX


Here are a few thoughts about “Boundaries” and how they apply in User Experience. 

I decided to gift a copy of Volume One of The Nature of Order to my daughter earlier today. I logged on to Amazon and looked at my address book. I am aware that she moved fairly recently. So, I was scrolling through my earlier text conversations with her to see whether she had told me of her new address. I couldn’t find a text about her new address, so I texted her to get the new address.

Suddenly, a popup window appeared from SIRI. It had her new address. I hadn’t said anything aloud. I thought of SIRI as a voice-activated service on my iPhone. It was disconcerting to have it “notice” my text message and then suggest an answer (which turned out to be correct). 

Last week, after physical therapy, my therapist & I began to discuss the time for my next appointment. I pulled up the calendar application on my iPhone and went to a particular day and began to type in her name. After the first two characters were typed in, the “type-ahead” function suggested three possible “completions” the first of which was the time we had been orally discussing (which was not a common time nor the time of any of my recent appointments with her). It also filled in her complete name and the purpose of the appointment, but that was more understandable. 

Photo by Nikolay Ivanov on Pexels.com

One sense of “Boundaries” in User Experience connects with a notion of “boundaries” that is much discussed in contemporary mental health. We are advised to “establish boundaries” with co-workers, family, friends, and strangers. We don’t necessarily want to share personal information with everyone or let everyone touch us in any way they choose to. If intimate details are shared in a recovery group or group therapy, it is generally agreed that such details will not be shared with others. 

We sometimes extend the idea of informational boundaries to written materials as well. If, for instance, we keep a personal diary, we do not expect other people to “search for it” or to read it. In this story, I relied on the expectation that someone would read a paper I “accidentally” dropped on the sidewalk. But she was so protective of my privacy that she wouldn’t even glance at my paper. 

On the other hand, if we write and publish an autobiography, then we can expect that other people will feel justified in discussing the contents. To me, it would seem odd for an author to feel “violated” if people start talking about the contents of their autobiography (or their blog).

When it comes to modern interactions with computer software, however, the boundaries are invisible — and sometimes non-existent. It can feel as though I write a private diary on paper, lock it up in a safe immediately, and then — without any sign that the safe has been broken into — suddenly find details of my personal life revealed!

There appear to be boundaries between applications, and certainly between devices, but these boundaries may be illusory. I find that troubling and confusing. I think the first application of “Boundaries” as a property of UX is that apparent boundaries should be real. There may be exceptions for exceptional circumstances; e.g., the police may get a search warrant to search your house if there is reasonable suspicion that you have committed a crime.

When a social media site analyzes your reactions, relationships, and word usage to determine what to try to sell you and what type of approach is most likely to succeed, that does not strike me as a reasonable response to an “emergency.” As most readers know by now, such information is not only used to try to sell you more stuff; it has also been used to manipulate public opinion; for example, to convince some US voters to stay away from the polls on election day in 2016, to convince voters in the UK to vote for Brexit, and to convince people not to get vaccinated.

Living things do have boundaries. Breaching those boundaries is typically something to be avoided. We call such breaches by names like “bites”, “wounds”, “diseases”, “gunshots”, “parasites.” Living cells typically have a cell membrane (and plant cells a wall). Within the cell are tinier organelles such as the mitochondria. The mitochondria have boundaries. The nucleus of a cell has a boundary. Within the nucleus, the nucleolus has a boundary. Larger structures often have boundaries. Motor neurons have a myelin sheath which allows neural impulses to travel faster. Almost our entire body is covered in layers of skin, which form a boundary.

 

The formation of boundaries does not stop with our physical body. Organizations of humans — nuclear families, clans, nation-states, counties, cities, townships, teams, corporations — they have boundaries. A bank, for instance, might have a safe for the money, but the building itself also functions as a boundary — not an impermeable boundary — customers are allowed to come in during banking hours. There are also legal boundaries. If you have “an account” at a bank, you will be allowed to do things that non-customers cannot. Similarly, if you are an employee of a company or a member of a sports team, you will be allowed to do things and go places that you couldn’t if you were outside that boundary. 

All boundaries are semi-permeable. Boundaries change over time. A thorn tears your skin. Your boundary is broken. If you’re not careful, bacteria can get in and cause an infection. Your white blood cells destroy the invading bacteria. Your body heals. If the cut was bad enough, you may get a scar and the scar is now part of your “boundary.” It isn’t only at the level of the body that changes occur. Your social boundaries change too.

You get married. You get divorced. You are born. You join a team. You quit the team. You sell your house and other people buy it. Now, you are no longer allowed to come into the house without an invitation. Meanwhile, you buy another house. You have acquired new boundaries. Or, perhaps, you have no home. You are homeless and your boundaries are not so secure. 

Most of our possessions have clearly defined boundaries. Your hammer is separate from your saw, which is separate from your drill. They come from an earlier time, and the “boundary” of such objects is determined by their shape. More recently, such tools (and nearly everything else!) are packaged in bubble wrap, which forms an additional boundary. This makes it harder for people to hide one under their clothes and walk out without paying. Such packaging has the added advantage that it will require time and energy on your part before you can actually start using the tool for its intended purpose. Not only that — such packaging helps pollute our world beyond the pollution required by “old style” tools.

Photo by cottonbro on Pexels.com

 

Once you have separated your new tool purchase from its packaging, if you have any energy left, you can saw a board, or drill a hole, or hammer a nail. But you do not expect (not yet, at least) the saw to “communicate” behind the scenes with your drill. Or with you. You’d be surprised if it piped up and said, “Gee, Gene, you just sawed a board. Now, you have taken up the drill. Would you like suggestions on how to build a dog house?” (That’s what Clippy would do).

(See the Wikipedia article on “Clippy” and how it was parodied: https://en.wikipedia.org/wiki/Office_Assistant)

Clippy tried to be helpful. But it didn’t really have enough information about my tasks, goals, and context to actually be helpful. But today’s behind-the-scenes information sharing with dark forces is not trying to be helpful. It’s trying to get you to change your behavior for someone else’s benefit — and you don’t even know who those someones are. 

Notice that if you buy a house (which typically comes with doors and keys), you can lock the house and the default is that it keeps everyone else out — except those you’ve given a key to or those who have rung your doorbell or otherwise asked for permission to come in. Typically, if a rare visitor comes to your house, you make arrangements for a time and a place. The piano tuner comes to tune your piano. You might let them use your bathroom or even offer them something to drink. But you don’t expect the piano tuner to redecorate your study or to spend the night uninvited. 

Photo by Mike on Pexels.com

That’s kind of what does happen in the electronic world though. In many cases, you cannot visit a website or use an application unless you give permission for the “guest” to rifle through the choices you make. Just to be clear, these “choices” are not only explicit choices; your “choices” can include how long you linger over a particular message or video clip. In many cases, you have not just given a key to a specific vendor, application, or website — in many cases, you have also given them rights, essentially, to make as many copies of your front door key as they care to make and hand them out to whomever they like. 

These are missing boundaries, not so much in the user interface design, but in the socio-technical context in which we use our technology. 

In the physical world in which we evolved, invasion of privacy typically involved symmetry. If I can see your eyes, you can see mine. Conversely, if I can’t see anyone, chances are that they can’t see me. Of course, this isn’t literally true. A tiger’s camouflaging stripes may mean that they can see the gazelles even though the gazelles cannot see them. The astounding eyesight of the eagle allows them to see a mouse on the ground and start their deadly dive before the mouse can see the eagle. 

In the electronic world, it isn’t genetically coded asymmetries of information that allow other people to invade your boundaries — in many cases without your permission or even knowledge. It is an asymmetry that comes from money and time. You don’t have anything like the fortune that rich companies have. They can hire experts at subverting your boundaries. They can hire an entirely different set of experts to convince you that it’s all okay. They can afford to hire still other experts to defend themselves in a court of law should you seek redress for any particularly unethical behavior. They can afford to hire politicians as well in order to make laws to protect their unfettered access to your data. You typically cannot afford to hire politicians to protect your right to privacy.

You probably don’t have 10,000 to 100,000 people working for you. Companies not only have the money to spy on you. They also have the time to collect and analyze your behavior & make sense of it. You don’t. Perhaps, every once in a while, you take the time to wade through a “privacy policy.” In most cases, since experts were hired to make the text as incomprehensible as possible, you likely didn’t see much value in reading the document.

The Nature of Order is about aesthetics, not ethics. And, this post was meant to be about aesthetics, not ethics. 

The poem by Keats, Ode on a Grecian Urn, 

https://www.poetryfoundation.org/poems/44477/ode-on-a-grecian-urn

ends thusly: 

“Beauty is truth, truth beauty,—that is all

                Ye know on earth, and all ye need to know.”

Life includes differences in sensory capability. And life includes camouflage. Generally, however, when you get to the edge of a cliff and step off, you have a pretty good idea what’s going to happen. The boundary is visible to you, to a bison, to a mouse, to a lemming, to an eagle.

Photo by Pixabay on Pexels.com

When we walk through the woods in the northeastern USA, where I lived for many years, we run the risk of being attacked by a deer tick. The deer tick makes a hole in you and starts sucking your blood (oh, and while they’re at it, they may inject a large dose of Lyme-disease bacteria into your bloodstream). You don’t notice it, because the deer tick is “kind enough” to administer a local anesthetic so you don’t feel any pain from this invasion of your person; this breaking of boundaries. It’s a one-sided breach. The deer tick is well aware of the invasion. It’s the whole point! But you do not perceive the breach. At least, I didn’t. Twice. Thankfully, I don’t seem to have any long-lasting effects, though I have several friends who do.

A one-sided boundary breach doesn’t seem “aesthetic” to me. Nor does it seem “truthful.” The little orange deer tick is, in a very real sense, lying to me. It uses its anesthetic to tell me, “No worries! There’s no wound here! There’s no deer tick sucking your blood. There’s no deer tick injecting a serious disease into your blood. No, no. All is well!” It seems the opposite of beauty and the opposite of truth.

I suppose if I had been born a deer tick, I might view things differently. 

———————-

Myths of the Veritas: The Orange Man

Pattern Language for Collaboration and Teamwork

Strong Centers

Thursday, 29 Jul 2021


Tags

beauty, Christopher Alexander, Design, HCI, UX

This is the second posting in a series of fifteen which examines Christopher Alexander’s “Fifteen Properties” of natural beauty and suggests how these properties might apply to user experience design.

Photo by Dominika Gregušová on Pexels.com

2. “Strong centers” is probably one of the most overlooked properties of design in UX/HCI. Often, what exists for the user, from their perspective, is a “sprawl” of functions, tool bars, and icons with no obvious overall or subsidiary organization. A better design would allow the user to quickly find a “home base” from which it would be obvious where to find subsidiary home bases. There is some sense in which hyperbolic trees, fisheye lenses, and home pages partly begin to address this issue.

Instead of “strong centers”, the impression I often get in looking at applications for word processing, organizing photos, searching, or dealing with settings is that the designers are given or generate a long list of functions to be supported. Which ones are related to which, though? Which ones are central? In many cases, UX practitioners give users (or, more often, potential users) a set of cards with one function each and ask the users to sort these into piles. I am not against such studies, but they are unlikely to lead to a coherent design with a strong center. The users are not, in most cases, professional designers. In many cases, an application is supposed to support many different specific actions. For example, I use word processors to write essays, poems, and fiction. I also use a word processor to proofread something, to re-organize ideas, to “jot down” a bunch of ideas, or to write an outline. These are very different tasks, at least to me. If asked to sort cards, I would do it differently depending on which type of task and which type of material I’m thinking about.

As I type this, I glance at the “Pages” tool bar which includes: Pages, File, Edit, Insert, Format, Arrange, View, Share, Window, and Help. None of these seems like “home” to the task of writing an essay. I know from experience that if I want to write any kind of material, I must go to the “File” menu even though, as best I can recall, my initial impression of this label was that it would be something to do after I was “done” with the tasks of composing and proofreading. The toolbar gives no impression of there being any “center” at all, let alone a “strong center.”

In my native language (English), I read from top to bottom, left to right. In that sense, the Apple Icon is first and the “Pages” item is next and it is in bold print. That could be considered a subtle clue that it’s the “most important.” In a way, the items on the “Pages” menu are “meta-items.” In that sense, I suppose you could argue that they are “important” — though as a writer, none of the items seem that important. In fact, if we get right down to it, nothing in Pages really seems designed to support the actual writing process. And, I’m not trying to single out Pages because it’s the only one or the worst one. Lack of a “strong center” seems true of nearly all applications. 

Different people use different processes for writing — and I myself use different processes for different types of writing — so perhaps trying to organize the features and functions so that there is a “strong center” reflective of the “strong center” of the task of writing is just not feasible. I am certainly not advocating for resurrecting “Clippy.”

Ideally, it should be possible for users to “know where the action is” upon entering an application or a web page.

In another interpretation, “strong centers” refers more to underlying architecture and points to the need for a core of functionality that transcends a specific release or even a specific application. A good underlying architecture will communicate this essential center (related to central purpose or style) to the user.

All too often, the processes involved in developing an application or system themselves have no “strong center.” If the development process is itself a hodgepodge political process of accommodating a portfolio of features and functions advocated for by diverse and uncoordinated stakeholders, then what one has is a long, unorganized list artificially shoved into menus and sub-menus and toolbars. It should not be surprising, then, that the user finds it difficult to know where to begin when first encountering an application — even if the user knows exactly what they want to accomplish.

Compare and contrast most menu structures and user interfaces with the “strong centers” that are extremely common in life forms. Here are some examples of butterflies. 

Photo by Cátia Matos on Pexels.com

The central axis includes the head, the thorax and the abdomen. These are in a line in the strong center. Typically, they are colored differently from the wings. The bilateral symmetry of the wings as well as the overall shape reinforces the strong center. Wing patterns, and even the antennae and legs lead the eye back to the strong center. 

Most people think of butterflies as beautiful — and I agree. When asked, however, most people will say they are “brightly colored.” Some are; some are not. But “bright colors” don’t necessarily give rise to beauty!

Strong Centers doesn’t just apply to butterflies. Look at most birds, fish, mammals, insects and you will see how the symmetries and smaller centers reflect and strengthen the major center. Indeed, Strong Centers are not limited to the animal kingdom. The trunk of a tree provides a strong center. Each branch is itself a center and the connection of the branches to the trunk reinforces the strength of the central trunk. 

Photo by Snapwire on Pexels.com

Even single cells often exhibit strong centers (and many of the other 15 properties, by the way). 

I would love to be able to provide you with a process or checklist or formula so you could design user experiences with “strong centers.” I cannot really do that. Nor can anyone else. If you keep it in mind, even at the back of your mind, you may see opportunities to help make that happen with regard to whatever you’re working on. I’m curious to hear your thoughts about “Strong Centers.”

———————

Some useful links to more information, discussion, and examples relevant to “strong centers” or to the fifteen properties.

Christopher Alexander – Fundamental Property 2: Strong Centers

https://www.archdaily.com/626429/unified-architectural-theory-chapter-11

Fifteen Properties of Natural Beauty & UX/HCI

Thursday, 22 Jul 2021


Tags

Design, HCI, human factors, UX

Photo by Dominika Gregušová on Pexels.com

This is an introduction to a series of blog posts on the “Nature of Beauty.” 

Christopher Alexander was an architect and city planner. In his Harvard dissertation, Alexander took a very mathematical approach to design. In our studies at IBM Research on the “Psychology of Design,” I first ran across that work (Notes on the Synthesis of Form). Later in his career, however, he took a quite different approach to design. With an international team, he visited many different parts of the world to see what “worked” in terms of architecture and city planning. The results were documented in the form of a “Pattern Language.” In this sense, a “Pattern” is the named solution to a recurring problem. A “Pattern Language” is a connected lattice of Patterns that together “cover” a field.

Others (including me) have emulated his approach for other fields such as pedagogy, organizational change, object-oriented programming, software development processes, and human-computer interaction. A few years ago, I suggested such Patterns for collaboration and teamwork. Here’s a link to the introduction of that effort. Here’s a link to the index of those Patterns. 

Still later in life, Christopher Alexander embarked on a project called The Nature of Order. This work is documented in a series of four books. In the first book, he proposes fifteen properties of good form in nature — and in beautiful artifacts. Explication and example will be needed to appreciate what these properties mean. In this post, I list the fifteen and will attempt to explain the first one with respect to UX and Human Computer Interaction.


First, you might be wondering what relevance these fifteen properties of nature might have to interface design. After all, can’t the designer just test out their ideas empirically until a UX design is shown to be usable, learnable, and perhaps enjoyable as well? Well, sure. Ideally, every possibility could be explored and tested empirically. 

But how many possibilities are there? Without any guiding principles, there are not only more possibilities than can be tested by you. There are more possibilities than there are atoms in the universe. Imagine a very simple interface on a small mobile device. Let’s say there are only ten screens in your whole application. The iPhone 12, for instance, has nearly 3 million pixels. 24-bit color allows over 16 million colors per pixel. If you literally tested out every possible arrangement, this would mean 16 million to the 3 millionth power for each of the ten screens!! The number of atoms in the universe is estimated to be between 10**78 and 10**82. Obviously, this is far less than 16,000,000**3,000,000 !! 

Of course, I’m not suggesting that anyone would attempt a pixel by pixel test of an interface, but the general point remains: you need some way to limit testing to reasonable alternatives. The notion of using these fifteen properties is not that they dictate a particular design nor that you don’t need to do any empirical testing. Rather, the fifteen properties could be used to help guide design. The properties could be thought of as reducing the search space. 
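As a sanity check on that arithmetic, here is a minimal Python sketch. It assumes the iPhone 12’s 2532 × 1170 display, which matches the “nearly 3 million pixels” figure above, and it works in logarithms because the actual numbers are far too large to compute directly:

```python
import math

pixels = 2532 * 1170     # assumed iPhone 12 resolution: ~2.96 million pixels
colors = 2 ** 24         # 24-bit color: 16,777,216 possible colors per pixel
screens = 10             # the hypothetical application's ten screens

# The number of possible renderings of ONE screen is colors ** pixels.
# Work in log10, since that number cannot be represented directly.
digits_one_screen = pixels * math.log10(colors)
print(f"one screen:  a number with ~{digits_one_screen:,.0f} digits")

# Independent choices multiply, so exponents (and digit counts) add.
print(f"ten screens: a number with ~{screens * digits_one_screen:,.0f} digits")

# For comparison, the atom count of the observable universe
# (10**78 to 10**82) has only about 79 to 83 digits.
```

Roughly 21 million digits for a single screen, versus roughly 80 digits for the atom count; the comparison is not even close.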

Here are the fifteen properties: 

  1. Levels of Scale
  2. Strong Centers
  3. Boundaries
  4. Alternating Repetition
  5. Positive Space
  6. Good Shape
  7. Local Symmetries
  8. Deep Interlock and Ambiguity
  9. Contrast
  10. Gradients
  11. Roughness
  12. Echoes
  13. The Void
  14. Simplicity and Inner Calm
  15. Not-separateness

Perhaps the names themselves might resonate with your own sense of aesthetics for design and composition, but let’s review them one by one. 

The first is “Levels of Scale.” 

Photo by Roney John on Pexels.com

When it comes to natural beauty, a few moments’ reflection may provide you with many examples. Christopher Alexander claims this property is also present in traditional art and architecture across many cultures. As you see something such as the Taj Mahal or the Parthenon in the distance, you see a beautiful shape. As you approach it, you will see more and more levels of scale. By contrast, many modern buildings are largely featureless between the overall shape and the texture of the building material.

If your design has multiple levels of scale, it will be easier for your user to orient themselves; to know “where they are” in the application and therefore easier to take appropriate action. It’s something to keep in mind with respect to your design — whether hardware, software, documentation, or a building. 

How could you use or see “Levels of Scale” as a desirable property of what you are doing right now? 

———————————-

Some references to Pattern Languages in HCI. 

Pan, Y., Roedl, D., Blevis, E. and Thomas, J. (2012), Re-conceptualizing Fashion in Sustainable HCI. Designing Interactive Systems conference. Newcastle, UK, June 2012.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. (2012). Edging Toward Sustainability. CHI Workshop Position Paper for Simple Sustainable Living. CHI 2012, Austin, Texas.

Thomas, J. (2012), Enhancing Collective Intelligence by Enhancing Social Roles and Diversity. CSCW Workshop Position Paper for Collective Intelligence and Community Discourse and Action. CSCW 2012, Bellevue, WA.

Thomas, J. (2011), Toward a pattern language for socializing technology for seniors. Workshop position paper accepted for CSCW 2011 workshop: Socializing technology among seniors in China, Hangzhou, China, March 19-23.

Thomas, J. (2011). Toward a Socio-Technical Pattern Language for Social Systems in China and the World. Workshop position paper accepted for CSCW 2011 workshop: Designing social and collaborative systems for China. Hangzhou, China, March 19-23.

Thomas, J. (2011). Toward a Socio-Technical Pattern Language for Social Media and International Development. Workshop position paper accepted for CSCW 2011 workshop: Social media for development, Hangzhou, China, March 19-23.

Bonanni, L., Busse, D., Thomas, J., Blevis, E., Turpeinen, M. & Jardin, N. (2011). Visible, actionable, sustainable: Sustainable interaction design in professional domains. Workshop accepted for CHI 2011. Vancouver, B.C., May 7-12.

Thomas, J. (2011). Focus on Ego as Universe and Everyday Sustainability. Workshop position paper accepted for CHI 2011 workshop: Everyday practice and sustainable HCI: Understanding and learning from cultures of (un)sustainability.  Vancouver, B.C., May 7-12.

https://www.researchgate.net/publication/2242380_A_Pattern_Approach_to_Interaction_Design

dearden-patterns-hci09.pdf

https://www.mit.edu/~jtidwell/common_ground.html

Thomas, J. C. (2018), Building common ground in a wildly webbed world: a pattern language approach. Journal of Information, Communication and Ethics in Society, 16 (3), 338-350.

——————————-

“The Psychology of Design”

Thursday, 08 Jul 2021


Tags

creativity, Design, HCI, human factors, IBM, leadership, research, UX


I worked at IBM, all told, about 28 years. During that time, management put more and more pressure on us to make our work “relevant” to the business. In fact, the pressure was always there, even from the beginning. Over the years, however, we were “encouraged” to keep shortening the time gap between doing the research and having the results of that research impact the bottom line. This was not an IBM-only phenomenon.

I was a researcher, not a politician, but it seemed to me that at the same time researchers in industrial labs were put under pressure to produce results that could be seen in terms of share price (and therefore payouts to executives in terms of stock options), academia was also experiencing more and more pressure to publish more studies more quickly — and to make sure “intellectual property” was protected so that the university could monetize your work. This was about the same time that, at least in America, increasing productivity and the wealth that sprang from that increased productivity stopped being shared with the workers.

Photo by Dmitry Demidov on Pexels.com

In the late 1970’s, the “Behavioral Sciences” group began to study the “psychology of design.” For the first few months, this was an extremely pleasurable & productive group, due mainly to my colleagues. Over the next few blogs, I’ll focus on some specific techniques and methods that you may find useful in your own work.

In this short story though, I want to focus instead on some broader issues relevant to “technology transfer”, “leadership” and “management.” Even if you are or aspire to be an expert in UX or HCI or design, I assure you that these broader issues will impact you, your work and your career. I wouldn’t suggest becoming obsessed with them, but being aware of their potential impact could help you in your own work and career. 

It is telling that, almost invariably, whenever I told someone inside IBM (or, for that matter, outside IBM) that I was studying the “psychology of design,” people responded by asking, “the design of what?” So, I would explain that we were interested in the generic processes of design and how to improve them. I would explain that we were interested in understanding, predicting, and controlling these processes to enable them to be more effective. I would explain that we could apply these findings to any kind of design: software design, hardware design, organizational design, and (see last post about IBM) communication design. I would explain that design was a quintessentially human activity. I would also explain that design was an incredibly leveraged activity to improve. 

Looking back on it, I still think all these things are true. I also see that I missed the “signal” people were giving me: while I thought of design as something that could be studied as a process, most people did not think of it that way. To them, it was never the “psychology of design,” but only the design of something.

Don’t get me wrong. I agree that somewhat different skills are involved in designing a great advertising campaign, a great building, and a great application. I agree that different communities of practice treat various common issues differently. I still think it’s worth studying commonalities across domains. For one thing, we may find an excellent way of generating ideas, say, that the advertising community of practice uses that neither architects nor applications developers had ever tried. Or, vice versa.

My own academic background was in “Experimental Psychology.” We were forever doing experiments that we believed were about psychological processes that were thought to be invariant regardless of the domain. It was an axiom of our whole enterprise that studying memory for any one thing shed light on how we remember every other thing. Similar studies looked at decision making or problem solving or multi-tasking. We came to understand that there were some interesting exceptions to being able to separate content from process. For instance, it is much easier to multi-task a spatial task and a verbal task than it is to multi-task two independent spatial tasks or two independent verbal tasks. 

We used a spectrum of techniques to study “design,” from laboratory studies of toy problems to observing people doing real-world design problems while thinking aloud. After about 3-4 months of very productive work, we were told that we had to make our work relevant to software development. That should be the focus of our work. We were told that this command came from higher-ups in IBM. That might have been true, or perhaps partly true.

It might also be relevant that someone in our management chain might have been the recipient of a grant from ONR which was specifically focused on software development. So far as I can tell, nothing had been done on that grant. So, our past, present, and future work could have been co-opted to be “results” done under the auspices of the ONR grant. 

In any case, regardless of the “reasons,” the group began to focus specifically on software design. In one study, we used IBM software experts as subjects. Each person was given information that was geared toward a specific transformation that occurs in software development. One person was presented with the description of a “situation” that included a number of “issues” and asked to write a requirements document. In real life, I would hope that this would be done in a dialogue (and, indeed, in other studies, we recorded such dialogues). Absent such dialogues, what we found was that different software experts — all from IBM Research, and all given the same documentation about a set of problems — generated vastly different problem statements and overall approaches.

Photo by ELEVATE on Pexels.com

In other parts of the study, other experts were variously given requirements documents and asked to do an overall, high-level system design, or given a high-level design and asked to design an algorithm, or given an algorithm and asked to code a section. There was always diversity, but the initial stage showed the greatest diversity. The initial stage is also the one that can cause the most expensive errors. If you begin with a faulty set of requirements — a misreading about how to even go about the problem — then the overall project is almost certain to incur schedule slip, cost overruns, or outright failure.

While the vital importance of the initial stages of design is true in software development, I would argue that it is likely also true for advertising campaigns, building designs — and even true for the design of research programs. We designed our research agenda under the assumption that we had a long time; that we were studying design processes independently of specific communities of practice or the nature of the problems people were attempting to address. We assumed that there was no “hidden agenda.” Although we believed we would eventually need to show some relevance to IBM business, we had no idea, when we began, that only relevance to software design would be “counted.”



—————-

Some of our studies on the “Psychology of Design.” 

Carroll, J. and Thomas, J.C. (1982). Metaphor and the cognitive representation of computer systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-12(2), pp. 107-116.

Thomas, J.C. and Carroll, J. (1981). Human factors in communication. IBM Systems Journal, 20 (2), pp. 237-263.

Thomas, J.C. (1980). The computer as an active communication medium. Invited paper, Association for Computational Linguistics, Philadelphia, June 1980. Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics, pp. 83-86.

Malhotra, A., Thomas, J.C. and Miller, L. (1980). Cognitive processes in design. International Journal of Man-Machine Studies, 12, pp. 119-140.

Carroll, J., Thomas, J.C. and Malhotra, A. (1980). Presentation and representation in design problem solving. British Journal of Psychology, 71(1), pp. 143-155.

Carroll, J., Thomas, J.C. and Malhotra, A. (1979). A clinical-experimental analysis of design problem solving. Design Studies, 1 (2), pp. 84-92.

Thomas, J.C. (1978). A design-interpretation analysis of natural English. International Journal of Man-Machine Studies, 10, pp. 651-668.

Thomas, J.C. and Carroll, J. (1978). The psychological study of design. Design Studies, 1 (1), pp. 5-11.

Miller, L.A. and Thomas, J.C. (1977). Behavioral issues in the use of interactive systems: Part I. General issues. International Journal of Man-Machine Studies, 9 (5), pp. 509-536.

——————————

Blog posts about the importance of solving the “right” problem. 

The Doorbell’s Ringing. Can you get it?

https://petersironwood.com/2021/01/13/reframing-the-problem-paperwork-working-paper/

Problem Framing. Good Point. 

https://petersironwood.com/2021/01/16/i-say-hello-you-say-what-city-please/

Problem formulation: Who knows what. 

How to frame your own hamster wheel.

The slow seeming snapping turtle. 

Author Page on Amazon. 

Walston & Felix Multiple Regression Study

Thursday, 24 Jun 2021


Tags

HCI, productivity, programming, regression, software, UX

As I mentioned recently, when I first arrived at IBM Research in the early 1970’s I began to work on Query By Example and other schemes to make it easier for non-programmers to interact productively but flexibly with computers. Some of the lessons learned have to do, not with my own work, but with the work of others. 

At that time, some researchers labelled their work as “The Psychology of Programming.” There were many debates — and some studies — about structures, syntax, which language was better than others, etc. There were also lengthy discussions about the process that one should use for software development. Doing any kind of “controlled” experiment on large scale code development is prohibitively expensive. It is rare that a company is willing to have two independent teams build the same piece of software in order to learn which of two methods is “better.” 

Of course, one such comparison would likely not prove much. It may be that one of the two teams had a “super-programmer” or an extremely good manager. Perhaps, flu broke out in one of the two teams. You would really need to study many more than two teams to properly and empirically study the impact of language, or syntax, or process. Doing reasonable-sized experiments would be far too costly and impractical. Our lab and others did do laboratory tasks in order to test one syntax variant against another and so on. The problems were generally quite small in order for the study to be practical. So, the applicability to real-world development projects was questionable.

Photo by Christina Morillo on Pexels.com

Walston & Felix (See below), on the other hand, were able to find data on a fair number of real-world projects and rather than try to control for languages, processes, etc., they did a multiple regression analysis based on what languages, methods, etc. real projects used and what the important predictors were of actual productivity. 
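For readers who have not run one, here is a minimal sketch of this kind of multiple regression in Python: ordinary least squares on a handful of made-up projects. The predictor columns, numbers, and productivity values are entirely hypothetical; Walston & Felix's actual study covered far more projects and variables.

```python
import numpy as np

# Each row is one (hypothetical) project; columns are candidate predictors.
# X columns: [1 (intercept), customer-interface complexity (1-5),
#             team experience (years), use of structured methods (0/1)]
X = np.array([
    [1, 4, 2, 0],
    [1, 1, 8, 1],
    [1, 3, 5, 1],
    [1, 5, 3, 0],
    [1, 2, 6, 1],
    [1, 4, 4, 0],
], dtype=float)

# y: observed productivity, e.g. delivered lines per person-month (made up).
y = np.array([120, 310, 220, 95, 280, 150], dtype=float)

# Ordinary least squares: find coefficients minimizing ||X @ beta - y||.
beta, _res, _rank, _sv = np.linalg.lstsq(X, y, rcond=None)

names = ["intercept", "interface complexity", "experience", "structured methods"]
for name, b in zip(names, beta):
    print(f"{name:22s} {b:8.1f}")
```

The point of such an analysis is not the toy coefficients, of course, but the method: let real projects vary as they will, then ask which factors actually predict productivity.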

Personally, I learned two lessons from their study. 

Lessons Learned: #1 — Sometimes, when it comes to what matters in the real world, controlled laboratory experiments have to give way to other methods such as studying “natural experiments.” Despite the many issues with trying to interpret such findings, no-one will pay for massive controlled experiments that parametrically vary programming methods, programming languages, etc. while controlling for quality of management, experience, complexity of task, etc. Multiple regression studies and in-depth case studies; ethnographic studies; interviews: all of these can provide useful input. 


Lessons Learned: #2 — The impact of the variables that our community of people were looking at in terms of syntax, structure, etc. were dwarfed by the impact of organizational variables. For example, Walston & Felix found that the complexity of the interface between the developers and the customers was extremely important.

DeMarco & Lister (See below) claim that, based on their decades of experience as consultants to the software development process, projects almost never fail for technical reasons; when they do fail, it’s almost always for organizational and management reasons.  

That conclusion dovetails with my experience. Many decades later, working for IBM in “knowledge management,” it was amazing how many companies wanted us to “solve” their knowledge management issues by building them a “system” for knowledge sharing.

But…

Management at the company would not provide:

- incentives to share knowledge,
- space to share knowledge, or
- time to share knowledge,

nor would they commit any personnel to gathering, vetting, organizing, and promoting the knowledge repository.

So — knowledge sharing was something employees were simply supposed to do on top of everything else they were doing.

They did not want a computer system, IMHO; they wanted a magic system. 

Photo by David Cassolato on Pexels.com

These experiences were part of my motivation for attempting to catalog “best practices” in collaboration and teamwork in the form of a Pattern Language. Christopher Alexander and his colleagues looked at “what worked” in various parts of the world when it came to architecture and city planning. What they did for architecture and city planning, I want to do for collaboration. 

Naturally, merely creating a catalog is not sufficient. I need to have people who will read it, understand it, modify and improve it, and then promulgate it via actual use. For now, it’s free. Comments and critiques are always welcome.

—————-

C. E. Walston and C. P. Felix, “A method of programming measurement and estimation,” IBM Systems Journal, vol. 16, no. 1, pp. 54–73, 1977.

https://en.wikipedia.org/wiki/Peopleware:_Productive_Projects_and_Teams

Thomas, J. (2008). Fun at work: Managing HCI from a Peopleware perspective. In D. McDonald & T. Erickson (Eds.), HCI Remixed. Cambridge, MA: MIT Press.

——————-

Introduction to a Pattern Language for Collaboration and Teamwork 

Index to a Pattern Language for Collaboration and Teamwork

Chain Saws Make the Best Hair Clippers 

Design – Interpretation Model of Communication

Tuesday, 22 Jun 2021


Tags

communication, deception, experiment, HCI, IBM, media, psychology, truth, UX

In my early days at IBM Research (1970’s), we were focused on trying to develop, test, or conceive of ways that a larger proportion of people would be able to use computers. One of the major ways of thinking about this was to use natural language communication as a model. After all, it was reasoned, people were able to communicate with each other using natural language. This meant that it was possible, at least in principle. Moreover, most people had considerable practice communicating using natural language. 

One popular way of looking at natural language (especially among engineers & computer scientists) was essentially an “Encoding – Decoding” model. I have something in my head that I wish to communicate to you. So, I “encode” my mental model, procedure, fact, etc. into language. I transmit that language to you. Then, you “decode” what I said into your internal language and — voila! — if all goes well, you construct something in your head that is much like what is in my head. Problem solved. 

Photo by LJ on Pexels.com

Of course, people who wrote about communication from this standpoint acknowledged that it didn’t always work. For instance, as speaker, I might do a bad job of “encoding” my knowledge. Or, I might do a good job of encoding, but the “transmission” was bad; e.g., static, gaps, noise, etc. might distort the signal. And, you might do a bad job of decoding. It’s an appealing model and helped engineers and computer scientists make advances in “communication theory” and helped make practical improvements in coding and so on.

As a general theory of how humans communicate, however, it’s vastly over-simplified. I argued that a better way of looking at human communication was as a design-interpretation process, not as an encoding-decoding process. One of the examples that pointed this out was a simple observation by Don Norman. Suppose someone comes up to you and asks, “Where is the Empire State Building?” You will normally give a quite different answer depending on whether they are in Rome, Long Island, or Manhattan. In Rome, you might say, “It’s in America.” Or, you might say, “It’s in New York City.” If you are on Long Island, you might well say, “It’s in Manhattan.” If you are already in Manhattan, you might say, “Fifth Avenue, between 33rd and 34th.” 

Photo by Matias Di Meglio on Pexels.com

Building on Don Norman’s original example, but based on your own experience, you can easily see that it isn’t only the geographical relationships that influence your answer. If you were originally from Boston, now on your own in Rome, struggling with Italian and homesick, and someone came up to you and asked that question in American English with a Boston accent, your response might be: “Are you joking? But how did you know I was an American? My name’s … “

On the other hand, if you’re a 13-year old boy in Manhattan — one with a mean streak — and someone asks you this question in broken English and they’re looking around like they are totally lost, you might say, “Oh, no problem. Just follow 8th Avenue, all the way north up to 133rd. It’s right there. You can’t miss it.” (Note to potential foreign visitors: most kids in Manhattan would not intentionally mislead you. But the point is, someone could.) They are not engaging some automatic encoding process that takes their knowledge and translates it into English. Absurd!

You design every communication. I think that’s a much more useful way to conceive of communicating. Yes, of course, there are occasions when your “design” behavior is extremely rudimentary and seems almost automatic. It isn’t though. It just seems that way. Let’s go back to our question-asking example. Suppose you work at an information booth in New York City. People ask you this question day after day, year after year. You’re seemingly giving the answer without any attention whatsoever. Suppose someone asks you the question, but with a preface. “Look here, chap! I’ve got a gun! And if you give me the same stupid answer you’ve given me every time before, I’ll shoot your bloody brains out!” You are going to modify your answer. It only seemed as though it was automatic.

When you design your answer, you take into account at least these things: some knowledge that you want to communicate, the current context (which itself has hundreds of potentially important variables), a model of the person you’re creating this communication for, and a set of goals that you are trying to achieve (e.g., get them safely to their goal, mislead them, entertain them, entertain yourself, entertain the people around you, demonstrate your expertise, practice your diction, etc.). The process is inherently creative. In many circumstances (writing, playing, exploring, discovering, partying), you can choose how creative you want to make it. In other cases, circumstances constrain you more (though likely not so much as you think they do).
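To make the contrast with encoding-decoding concrete, here is a toy Python sketch of the design view, using the Empire State Building example from above. All of the names, rules, and answers here are my illustrative assumptions; a real "designer" weighs vastly more context than this. Even the toy shows, though, that the output is selected for an audience and a goal rather than mechanically encoded:

```python
def design_answer(asker_location: str, goal: str = "inform") -> str:
    """Answer 'Where is the Empire State Building?' for a given context."""
    if goal == "mislead":  # the mean-streak 13-year-old from the example
        return "Follow 8th Avenue north to 133rd. You can't miss it."
    # The same fact, "designed" differently for different contexts:
    answers = {
        "Rome": "It's in New York City, in America.",
        "Long Island": "It's in Manhattan.",
        "Manhattan": "Fifth Avenue, between 33rd and 34th.",
    }
    return answers.get(asker_location, "It's in New York City.")

print(design_answer("Rome"))                       # It's in New York City, in America.
print(design_answer("Manhattan"))                  # Fifth Avenue, between 33rd and 34th.
print(design_answer("Manhattan", goal="mislead"))  # the deliberately wrong directions
```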

Many readers may think this is a classic example of a straw man argument: “No-one believes communication is a coding-decoding process.”

Well, I beg to differ. I worked for relatively well-managed companies. I’ve talked to many other people who have worked in different well-managed companies. We’ve all seen or heard requests like this: “I need a paragraph (or a slide or a foil) on speech recognition. Thanks.” 

What??

Who’s the audience? Are they scientists, investors, customers, our management? How much do they already know? What are your goals? What other things are you going to talk about with them? The people who have left me such messages were all smart people. And, providing the necessary info only took a minute or two. But it critically improved the outcome. It’s not a straw man argument. 

Sit-com plots often hinge on the characters doing poorly at designing and/or interpreting communications. A show based on encoding-decoding? No. What is funny — indeed what often is shown in comedy — is people failing to do good design; in the extreme case, that is achieved by having an actual robot as a character, or someone who behaves like one.

People also interpret what was said in terms of their goals, the context, what they believe about your goals and capacity, what they already know, and so on. And, even though this may seem obvious, millions of people believe what advertisers or politicians say without questioning their motives, double-checking with other sources, or even looking for internal inconsistencies in what is being touted as true. In other cases though, the same people will not believe anything the “other side” says no matter what. Just as one can do faulty design, one can also do faulty interpretation. 

In any case, I decided that it would be good to “show” in a controlled laboratory setting that the Encoding-Decoding model was woefully inadequate. So, I brought in “subjects” to work in pairs at a simple task about communicating Venn diagram relationships. The “designer” had a Venn diagram in front of them. The “interpreter” was supposed to draw a Venn diagram. The “designer” was constrained to say something true and relevant. In addition to a “base” pay, the “interpreter” subjects would be given a bonus according to how many relationships matched those of the “designer.” The designer’s bonus depended on condition. In the “cooperation” condition, their payoff would also, like the interpreter’s, be determined by the agreement in the diagrams. In the “competition” condition, the designer’s bonus depended on how different the two diagrams were.
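As a side note, the payoff scheme as described can be sketched in a few lines of Python. The scoring function, rate, and relationship encoding below are my illustrative assumptions, not the actual experimental materials:

```python
def agreement(designer_rel: set, interpreter_rel: set) -> int:
    """Count of Venn-diagram relationships the two drawings share."""
    return len(designer_rel & interpreter_rel)

def bonuses(designer_rel: set, interpreter_rel: set, condition: str,
            rate: float = 0.25) -> tuple:
    """Hypothetical per-relationship bonuses for designer and interpreter."""
    shared = agreement(designer_rel, interpreter_rel)
    interpreter_bonus = rate * shared        # always paid for agreement
    if condition == "cooperation":
        designer_bonus = rate * shared       # paid for agreement too
    else:  # "competition": paid for how different the diagrams are
        designer_bonus = rate * len(designer_rel - interpreter_rel)
    return designer_bonus, interpreter_bonus

# Relationships encoded as strings such as "A subset of B".
d = {"A subset of B", "B overlaps C", "A disjoint C"}
i = {"A subset of B", "B overlaps C"}
print(bonuses(d, i, "cooperation"))   # (0.5, 0.5)
print(bonuses(d, i, "competition"))   # (0.25, 0.5)
```

Note the incentive structure this creates: in the competition condition, every true-but-unhelpful statement the designer makes is money in their pocket.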

Photo by August de Richelieu on Pexels.com

I ran about half the number of subjects I had planned to run when the experiment was ended by corporate lawyers. 

What? 

IBM had no unions at that time. And, they didn’t want any unions. One of their policies, which, they believed, would help them prevent the formation of unions, was that they never paid their workers for piece-work. Apparently, somehow, IBM CHQ had gotten wind of my experiment. People were being paid different amounts, based (partly) on their performance. They couldn’t have this! People might think we were paying people for piece-work!

It hardly needs to be said, I suppose, that IBM definitely tried to pay for performance. This was true in sales, research, development, HR, management, and so on. No-one in IBM would argue that your pay shouldn’t be related to your performance. That was exactly what — in one way of describing it — was going on here. By the way, these were not IBM employees, and each subject only “worked” for about an hour.

Basically, however irrelevant this experimental set-up might have been to the genuine concern of unions — that workers not be paid under an insanely aggressive and ever-changing piece-work scheme — the lawyers were concerned that it would somehow be misrepresented to workers or in the press and used as evidence that IBM should unionize. In a way, the lawyers were proving the point of the experiment in their own real-life behavior even as they insisted the experiment be shut down.



Lessons Learned: #1 Corporate lawyers are not only concerned about what you actually do or how you represent your work; they are also worried about how someone might misrepresent your work. 

Lessons Learned: #2 Even when constrained to say something true and relevant, ordinary people are quite capable of misleading someone else when it’s to their benefit and considered okay to do.

It is this second aspect of the experiment that I myself felt to be “edgy” at the time. Sure, people can mislead, but I was providing a context in which they were being encouraged to mislead. Was that ethical? Obviously, I thought it was at the time. On reflection, I still think it’s okay, but I’m glad that there are now review boards to look at “studies” and give a less biased opinion than the person who designed the study would.

I view the overall context of doing the study as positive. As adults, these people all already knew how to mislead. I was letting them, and many other people, know that we know you know how to mislead and we’ll be on the lookout for it. 

What do other people think about studies wherein the experimenter encourages one person to deceive another? 

———————-

References to published literature that describes some of the research that was done around that time.

Malhotra, A., Thomas, J.C. and Miller, L. (1980). Cognitive processes in design. International Journal of Man-Machine Studies, 12, pp. 119-140.

Carroll, J., Thomas, J.C. and Malhotra, A. (1980). Presentation and representation in design problem solving. British Journal of Psychology, 71(1), pp. 143-155.

Carroll, J., Thomas, J.C. and Malhotra, A. (1979). A clinical-experimental analysis of design problem solving. Design Studies, 1 (2), pp. 84-92.

Thomas, J.C. (1978). A design-interpretation analysis of natural English. International Journal of Man-Machine Studies, 10, pp. 651-668.

Thomas, J.C. and Carroll, J. (1978). The psychological study of design. Design Studies, 1 (1), pp. 5-11. 

———————

Other essays that touch on communication. 

Freedom of Speech is not a License to Kill

Ohayogozaimasu

The Sound of One Hand Clasping

Fool Me

Claude the Radioman

Know What? 

The Story of Story, Part 1

The Temperature Gauge

“Wizard of Oz”

Friday, 18 Jun 2021


Tags

HCI, IBM, research, usability, UX, Wizard of Oz

(Some Lessons Learned from studies in Human-Computer Interaction/User Experience conducted at IBM Research in the mid-70’s.)

Photo by Johannes Plenio on Pexels.com


One of the studies I conducted at IBM Research in the mid 1970’s was part of an effort to do “Automatic Programming” — a department under Pat Goldberg. The first level manager I worked with was Irving Wladawsky (later Irving Wladawsky-Berger). His group wanted to develop a system that would allow the owner/operator of a small business to type requirements into a computer in English (or something English-like) and have the system itself produce RPG code to run the business so described. 

The underlying motivation from an IBM business perspective was that many small businesses could well afford a computer to do inventory, fulfill orders, etc. but they couldn’t afford to hire programmers to create such a system from scratch. The small business owner in the mid-1970’s did not program! Yet, for the most part, they understood how their business worked. The notion was that a natural language understanding and generation program could dialogue with the user/owner and through that process, understand their “business rules.” No costly programmers needed!

Photo by Dmitry Demidov on Pexels.com



An interesting side note: at that time, we were told that IBM corporate forbade us to use the terms “Artificial Intelligence” or “Robotics” to describe our work because some PR firm had determined that these terms were too scary for the general public. So, IBM had research in “mechanical assembly” but not “robotics.” We had work in character recognition, speech recognition, handwriting recognition, automatic program generation, and compiler optimization. But no work in “Artificial Intelligence.” (Wink, wink, nod, nod). 

Labelism: Confusing a thing with the label for that thing.

Another interesting side note: I worked at IBM Research for a dozen years; started an AI lab at NYNEX where I worked another 13 years; came back to IBM Research and several years later found myself working on the same problem! We were still trying to make a system to allow small businesses to generate their code automatically. In my second iteration, rather than using natural language, we were trying to make the specification of business rules in a graph language that was intuitive enough for business owners. This was a different approach, but trying to address the same underlying desire: to bring computing to small business without incurring the heavy costs of programming and maintenance. 

Let’s return to iteration one — the natural language approach, circa 1975. Well, one issue was that no-one had a natural language program that even approximated being able to do the job. So…how to study people’s interaction with a system that doesn’t exist?

We used an approach that my colleague Jeff Kelly called the “Wizard of Oz” technique; viz., use a human being (in this case, me) to simulate how the system might work and record people’s behavior. In this way, we could discover many of the issues that such a natural language programming system would have to deal with. I had already had plenty of experience interacting with a computer; and I had acting experience. I could “play the part” of a computer fairly well as I typed in my questions and answers. 

(Description of “The Wizard of Oz” technique).
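For the curious, the mechanics are simple enough to sketch. Below is a minimal, hypothetical text-only version in Python: the subject types at a console, a hidden human "wizard" types the replies, and everything is time-stamped to a log for later analysis. A real setup would route the messages between two machines so the wizard stays out of sight; this single-process version only illustrates the logging loop.

```python
import time

def wizard_of_oz_session(logfile: str = "woz_log.txt") -> None:
    """Run one logged subject/wizard exchange loop; type 'quit' to end."""
    with open(logfile, "a") as log:
        while True:
            utterance = input("SUBJECT> ")
            if utterance.strip().lower() == "quit":
                break
            log.write(f"{time.time():.0f}\tSUBJECT\t{utterance}\n")
            # In a real study, the wizard sits at a second console and the
            # subject sees only the reply, styled to look machine-generated.
            reply = input("WIZARD> ")
            log.write(f"{time.time():.0f}\tWIZARD\t{reply}\n")

if __name__ == "__main__":
    wizard_of_oz_session()
```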

IBM Research in Yorktown had roughly a thousand people including not only scientists, programmers, and engineers but also a number of business people (who did not know how to program). I knew some of them from playing tennis and table tennis and we used those folks as initial subjects. What did I find? Good news and bad news. 

Dealing with natural language is tricky for many reasons. One of those reasons is that English, including the English that people normally use to describe their business, is filled with words that have multiple meanings; e.g., “file”, “run”, “program”, “object”, “table”, etc. But here is the good news: although it’s true that many English words have many meanings, when these business people described business procedures, almost all of the lexical ambiguity vanished! The program to understand business English would not have to distinguish between a business file and a nail file; it wouldn’t have to worry about distinguishing a run in baseball or a run in stockings from a run of the payroll program; it wouldn’t have to distinguish between the table in a relational data base and the table in your dining room. The domain would mainly constrain! That’s the good news.

The bad news was dialogue management. How can the machine recognize a misunderstanding, and how can it correct one? To make matters worse, while business people were fairly consistent in the way they described how their business ran, they were not consistent in how they talked about the communication itself. If one human being senses that another is misunderstanding, then, depending on context, they might raise their eyebrows or say: “Huh?” “Come again?” “What?” “I think I lost you.” “WTF?” “Are you kidding?” “We’re on different wavelengths.” “I don’t get it.” “But…wait.” 

Photo by Nafis Abman on Pexels.com

Sometimes, these are referred to as “meta-comments.” Here’s a simple example that took place in the study. 

One of the business people told me about various discounts. Playing the part of the computer, I had assumed he was talking about discounts applied to particular items for inventory-management reasons. I recorded all the various percentages and so on. Then he said, “Now, we also give discounts for various items.” 

Most natural language systems of that era simply ignored words like “now” and “also” in this context. Stepping out of my role as a “computer system” and thinking about it from the perspective of a human conversational partner, though, these words are crucial! They signal a change in topic. In the larger context of our conversation, they revealed that everything that had just been said, which I had taken to be about item discounts, was not about item discounts at all!

This is just one example, but there were many more. In my more recent experience interacting with various computer dialogue systems, recognizing the signals of miscommunication and repairing misunderstandings are still not handled very well, more than four decades later.

I’d be interested in any pointers you have to a system that you think deals with meta-communication in a natural and robust manner. I do not think it is beyond the realm of possibility. The general categories of the ways that people misunderstand each other are not infinite. John Anderson developed excellent tutoring systems for LISP and geometry, and those systems worked something like human tutors in that the tutor inferred the mental model of an individual student and focused instruction on correcting any misconceptions. My intuition is that a generic system built with equal complexity could deal with most of these issues about as well as the average human being deals with them; i.e., imperfectly. 

—————————————

Lessons Learned #1: You can test aspects of a system before it’s built, or even before it’s completely defined. One method that has been used many times: “Wizard of Oz.” 

Lessons Learned #2: The language professionals use to talk about their own domain is far more constrained, in terms of lexical ambiguity, than the language taken as a whole across all native speakers.

Lessons Learned #3: People in “our culture” (i.e., US business culture) do not have an agreed-upon, consistent vocabulary for talking about communication failures, nor a consistent process for dealing with them.

Lessons Learned #4: Speaking of communication errors: I don’t recall why, but it was about this time that I realized my notion of how research results would be transferred to other parts of IBM was a complete and utter fantasy. I hadn’t articulated it, but it was basically that I would do research, write the results up for publication in scientific journals for an academic audience, and publish Research Reports that would be eagerly consumed by anyone who needed to know. I’m not proud of this. LOL. But that’s really kind of how I viewed it. And then, after a few years, I realized that the transfer of research results mainly came about through relationships. That was something people had been showing me all my life, but which I don’t think anyone had ever stated explicitly enough.

———————————-

Author Page on Amazon

The Myths of the Veritas (an exploration of leadership & ethics in free, no ads fiction)

Index to a Pattern Language for Collaboration and Teamwork

Experiences in Human-Computer Interaction

Post on “The Story of Story” 

Query By Example

15 Tuesday Jun 2021

Posted by petersironwood in Uncategorized

≈ Leave a comment

Tags

expertise, HCI, IBM, QBE, research, usability, UX

Photo by RF._.studio on Pexels.com

This is part of a series on experiences in my career in Human Computer Interaction and some lessons learned.

I joined IBM Research on the winter solstice of 1973. I had earned a Ph.D. in Experimental Psychology from the University of Michigan, and for the previous few years I had managed a research project at Harvard Medical School on the “Psychology of Aging.” At the time, I was married and had three small children. I mention this because I was funded by so-called “soft money,” which basically meant that my salary depended on a research grant. I helped write a renewal of the grant, but the decision was “deferred”; that is, it was neither funded nor unfunded. Then it was deferred again. This meant that if the grant were not funded, I would have only a few weeks to find a new job. That seemed far too short, so I began to look elsewhere for a job. 

Lessons Learned #1: If you want continuity of personnel in your laboratory, make sure you have overlapping and multiple grants or other sources of income. 

In this case, the grant actually was ultimately approved, but by that time, I had already agreed to join IBM Research. That turned out to be fine, by the way. It was a wonderful place to work.

One of the reasons that I got the job at IBM was that I already knew something about computers. I had taken several computer science courses in grad school along with the needed psych courses. More importantly, our “Psychology of Aging” study was run by a PDP-8 and I had programmed the computer to run our suite of experiments and to do data analyses on the results. I had taken a week-long course at DEC in Maynard, Massachusetts on the assembly language, another week-long course on the machine language, and another week-long course actually tracing the circuitry with a probe and oscilloscope. I felt I “understood” the PDP-8 at a fairly deep level. 

At IBM, I did not have that familiar machine. Instead, I was connected to a mainframe via a dumb terminal. The first day at IBM, I got my userid and tried to log on to APL (A programming language I had not used before). I tried following the manual but I could not seem to get logged on. After hours of trying, I finally gave up and went down to the computer room and found someone willing to help. I showed him the logon instructions I was trying to follow and he immediately said, “Oh, yeah, that doesn’t work any more. We changed that months ago. Here’s how you need to do it now.” The manual I had may have looked new, but it was out of date. 

Lessons Learned #2: Manuals can be wrong. These days, most are online. But they can still be wrong.

Lessons Learned #3: Someone who knows how to do something can save you hours with a few minutes of their time. 

Of course, it’s more respectful, efficient, and a better learning experience if you can figure it out on your own. But sometimes you can’t. My stumbling block was not due to an error in logic, or a lack of in-depth knowledge. It was simply that the computer center administrators had changed something arbitrary so that the documentation I was given about how to log on for the first time was no longer accurate. 

In order to teach myself APL, I wrote a very small program to “predict” how long I was going to live “based on” some behaviors that I was interested in controlling. My main goal was to learn APL. My secondary goal was to motivate myself, for instance, to exercise more, lose weight, and not drink too much alcohol. I had no intention or pretension of making this prediction “accurate.” If I had been doing a consulting gig for an insurance company setting life insurance rates, for example, I would have paid far more attention to what the real data were and incorporated many more variables into the regression model. 

Here’s a link, by the way, to a more accurate model than the one I used (though it’s still simple to use): https://www.death-clock.org. Note that my goal was to motivate myself, so I intentionally exaggerated the impact of those behaviors I was trying to change. I had programmed it; I knew how “bogus” the calculation was. Nonetheless, here’s the interesting thing: 

Lessons Learned #4: Even an over-simple model that the user knows is over-simple can still motivate change. 
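In that spirit, here is a toy Python sketch of what such a deliberately over-simple model might look like. My original was in APL, and these coefficients are invented for illustration, just as mine were; nothing here is actuarial.

```python
# A deliberately over-simple "death clock," in the spirit of my old
# APL program. All coefficients are invented and exaggerated on
# purpose -- the point is motivation, not actuarial accuracy.
BASELINE_YEARS = 78.0

def predicted_lifespan(exercise_hours_per_week, drinks_per_week, pounds_overweight):
    years = BASELINE_YEARS
    years += 0.5 * exercise_hours_per_week   # exaggerated benefit
    years -= 0.4 * drinks_per_week           # exaggerated cost
    years -= 0.1 * pounds_overweight
    return years

print(predicted_lifespan(exercise_hours_per_week=3,
                         drinks_per_week=10,
                         pounds_overweight=20))   # 73.5
```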

Photo by Mike on Pexels.com

At last we come to the actual project I worked on: the usability and learnability of Query By Example. One of my colleagues, Moshe Zloof, invented this query language for relational data bases. He had designed the language but not yet implemented it. I did not immediately test the design; first, I sought to understand it. In seeking to understand it in depth, prior to testing it, the two of us had some sense-making discussions. As a result, Moshe improved the design; in particular, our discussions uncovered some ambiguities and inconsistencies that were not at all obvious when he simply gave talks about the design. This brings me to the next lesson learned, which has proven true in nearly every study of early-stage designs that I’ve been involved with over the course of five decades.

Lessons Learned #5: Don’t just accept a surface description of something; understand it as deeply as you can before designing a study.  

In this particular case, it was possible for me to understand it in some depth. Relational data bases and second order logic are things I was capable of understanding. If it had been an interface to running a nuclear reactor or using the artificial heart that Moshe had designed earlier in his career, that would have been a much more difficult task for me.

I wanted to understand not just the “logic” of Query By Example but also possible contexts of use. For instance, my manager and I visited Burlington, Vermont to talk with IBMers who actually used query languages to understand what was happening in chip production lines. At one point, a particular production line that had been producing nearly 100% perfect chips started having a much higher error rate. Using their query facility, they were quickly able to diagnose the cause of the change: a supplier of one of the raw materials had switched sources. In turn, this meant a slightly different profile of trace impurities in the substrate. Of course, this is only one example, but to me, understanding something in depth means not only understanding its internal logic but also understanding real users, their real tasks, and their context of use. 

Photo by Chokniti Khongchum on Pexels.com

I won’t go into all the details of the pencil & paper study or the results. High school students and then college students were taught the basics of the language and then given a simple relational data base and a set of questions stated in English, which they had to translate into Query By Example. Briefly, the bottom line was that Query By Example was easy to learn and easy to use. However, there were still questions that people had difficulty with. In analyzing the data and doing some further experiments, we found that the difficulties people tended to have stemmed not so much from Query By Example per se as from what I much later came to call “labelism”; that is, confusing a label with the thing that label refers to. 

Here’s a simple example of the type of confusion we saw. In Query By Example (and other query languages) there is usually an OR operator and an AND operator. (These operators can be important for doing advanced queries with search engines as well). If you are interested in getting a list of pets you might adopt and you’re willing to adopt dogs or cats, you might ask for “cat OR dog.”  If you only want long-haired cats, you might ask for “cat” AND “long hair.” 

English, however, can be tricky.

If you and I (as opposed to you and a query language) are having a conversation, you might say, “I hear there are many pets that need to be adopted.” 

I say, “Yes, there are all kinds of pets. There are snakes, dogs, turtles, rabbits, cats…” 

You say, “Let me stop you right there. I’m only interested in adopting cats and dogs. Those are the only animals I’d want to adopt.” 

See what you said there? Your exact words included: “…cats and dogs.” If you put “cats AND dogs” into a query against the data base of available pets, however, you will get the null set (that is, nothing) back. There are no animals who are both cats and dogs! (Though my part Maine Coon cats do play fetch like dogs.) 

When people were presented with an English statement that included the English word “and,” regardless of the actual syntax and context, some of them had difficulty using the OR operator. If instead the query in English had been set up like this: “Oh, I don’t want reptiles. I’d be happy adopting a cat or a dog, however,” then they’d have no problem translating it into the OR operator in the query language. 
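Here is the same point in runnable form, using Python list filtering rather than Query By Example itself (the pet data are invented):

```python
# In a query language, AND means "both properties at once," not the
# conversational "and" of English. Toy data for illustration.
pets = [
    {"name": "Whiskers", "species": "cat"},
    {"name": "Rex",      "species": "dog"},
    {"name": "Slither",  "species": "snake"},
]

# English: "I'm only interested in cats and dogs."
# Naive translation: species is "cat" AND "dog" -- impossible, so empty.
both = [p for p in pets if p["species"] == "cat" and p["species"] == "dog"]
print(both)                             # []

# Correct translation: species is "cat" OR "dog".
either = [p for p in pets if p["species"] in ("cat", "dog")]
print([p["name"] for p in either])      # ['Whiskers', 'Rex']
```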

Lessons Learned #6: Sometimes the difficulty people have in using a product, a service, or a prototype is due not to the interface details but to the structure of the task, their background, and their training.  

By analogy, giving me a better tennis racquet will not allow me to beat Nadal or Djokovic at tennis! (Although if you gave one of them a toothpick for a tennis racquet, I might have a shot.)

Photo by Isabella Mendes on Pexels.com

That sounds obvious and even absurd, but I promise you, some companies get so greedy that they want you to design a system that allows people who do not understand the task, and who have minimal background and training, to nonetheless perform that task. 

One example you may have run into is “help desk” personnel who have no understanding of a product going through a script to help you “solve your problem.” Sometimes it works. But many times it doesn’t. When it does not work, you might not be able to “fix” the system by making the interface to the scripts easier for the help desk folks to use. The problem is much deeper (in some cases). Yes, a really bad interface can make it difficult even for a really knowledgeable and capable person to do the job. But even a really great interface cannot always substitute for actual expertise.

——————————————————-

Essays on America: Labelism 

Other posts on problem formulation: 

The Doorbell’s Ringing

Reframing the Problem

I Say Hello

I Went in Seeking Clarity

Who Knows What?

Measure for Measure

I’d Like Sauerkraut on the Ice Cream

14 Friday May 2021

Posted by petersironwood in Uncategorized

≈ 3 Comments

Tags

A/B testing, experiments, HCI, human factors, psychology, Study, UX

Which is Better? A or B?

Nearly everyone in the field of Human-Computer Interaction (related fields are known as Human Factors and User Experience) has heard of A/B testing. How should we lay out our web pages? Should we have a tool bar? Should it be always visible or only visible on rollover? What typefaces and color schemes should we use? 

Clearly A/B testing is useful. However — there are at least two fundamental limitations to A/B testing. 

Photo by Darius Krause on Pexels.com

First, for almost any real application, there are far too many combinations of choices to test them all (see the sketch below). This is where the experience of the practitioner and/or knowledge of the field and of human psychology can be very helpful. Your experience and theory can help you make an educated guess about how to prioritize the questions to be studied. Some questions may have an obvious answer. Others might not make much difference. Some questions are more fundamental than others. For instance, if you decided not to use any text at all on your site, it wouldn’t matter which “font” your users prefer. 
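Here is a back-of-the-envelope sketch of how quickly the combinations blow up; the design decisions and option counts below are hypothetical.

```python
# Even a handful of independent design decisions multiply into far
# more variants than any A/B program can test. Counts are invented.
from math import prod

design_choices = {
    "typeface": 4,
    "font_size": 3,
    "color_scheme": 5,
    "toolbar_visibility": 2,   # always visible vs. visible on rollover
    "page_layout": 3,
    "icon_style": 3,
}

variants = prod(design_choices.values())
print(f"A full factorial test would need {variants} variants.")   # 1080
```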

Second, some decisions interact with others. For instance, you may test a font size in the laboratory with your friends. Just as you suspected, it’s perfectly legible. Then it turns out that your users are mainly elderly people who use your app while going on cruises or bus tours. In general, the elderly have less acute vision than the friends you studied in the lab. Not only that: you were showing the font on a stable display under steady conditions of illumination. The bus riders are subject to vibration (which also makes reading more difficult) and frequent changes in illumination due to the sun or artificial light being intermittently filtered by trees, buildings, etc. Age, vibration, and illumination changes are variables that interact by being positively correlated. In other cases, variables interact in other and more complex ways. For example, increasing stress/motivation at first increases performance. But beyond a certain point, increasing stress or motivation actually decreases performance. This is sometimes known as the Yerkes-Dodson Law (https://en.wikipedia.org/wiki/Yerkes–Dodson_law).

The story doesn’t stop there, though. How much stress is optimal partly depends on the novelty and complexity of the task. If it’s a simple or extremely practiced task, quite a high level of stress is optimal. Imagine how long you might hang on to a bar for one dollar, for a thousand dollars, or to save yourself from falling to the bottom of a 1000-foot ravine. For a moderately complex task, a moderate level of motivation is optimal. For something completely novel and creative, however, a low level of stress is often optimal. 
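Here is a toy sketch of that kind of interaction. The functional form and numbers are invented purely for illustration, not taken from the Yerkes-Dodson literature; the only point is that the optimum shifts as the task gets harder.

```python
import numpy as np

def performance(arousal, difficulty):
    # Toy inverted-U curve: performance peaks at some optimal arousal,
    # and that peak shifts lower as task difficulty rises.
    # Invented functional form -- not fitted to any data.
    optimal = 1.0 / (1.0 + difficulty)
    return np.exp(-((arousal - optimal) ** 2) / 0.1)

for task, difficulty in [("practiced", 0.2), ("moderate", 1.0), ("novel", 4.0)]:
    levels = np.linspace(0.0, 1.0, 101)   # arousal from low to high
    best = levels[np.argmax(performance(levels, difficulty))]
    print(f"{task:9s} task: best arousal level is about {best:.2f}")
```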

The real point isn’t about these particular interactions. The more general point is that testing many variables independently will not necessarily result in an optimal overall solution. Experience (your own and that of others) can help dissect a design problem into those decisions that are likely to be relatively independent of each other and those that must be considered together. 

Life itself has apparently “figured out” an interesting way to deal with the issue of the interaction of variables. Genes that work well together end up close together on the chromosome. That means that they are more likely to stay together and not end up on different chromosomes because of cross-over. By contrast, genes that are independent or even have a negative impact, when taken together, tend to end up far apart so that they are likely to be put on different chromosomes. 

Photo by Nigam Machchhar on Pexels.com

So, for example, one might expect that a gene for more “feather-like skin” and more “wing-like front legs” might be close to each other while a gene for thicker, heavier bones would be far away. 

Clearly, the tricky way variables interact isn’t limited to “User Experience Design” of course. Think of learning a sport such as tennis or golf. You can’t really learn and practice each component of a stroke separately. That’s not the way the body works. If you are turning your hips, for example, as you swing, your arm and hand will feel differently than if you tried to keep them still while you swung. 

Do you have any good tips for dealing with interactions of variables? In User Experience or any other domain? 

————————————

Some experiences in UX/HCI

Chain Saws Make the Best Hair Trimmers

In the Brain of the Beholder

Study Slain by Swamp Monster!

Buggy Whips to Finger Tips


Career Advice from Polonius

20 Saturday Mar 2021

Posted by petersironwood in Uncategorized

≈ 1 Comment

Tags

career, HCI, human factors, usability, UX

Photo by Ott Maidre on Pexels.com

Career Advice from Polonius: To Thine Own Self be True

“To thine own self be true.” This advice comes from Polonius who is giving advice to his son in Act I, scene 3 of Shakespeare’s Hamlet. 

Polonius says: “This above all: to thine own self be true. And it must follow, as the night the day, Thou canst not then be false to any man.”

Let’s focus on the first part. 

One of the dreams of education is to customize teaching to the specific learning style(s) of individual students. This was a hot topic when I was in graduate school.

Photo by Suliman Sallehi on Pexels.com

Around 50 years ago. 

Some day, your grandchildren or your great-grandchildren may be the beneficiaries of learning experiences that are individualized to their specific styles. I wouldn’t hold my breath, but it could happen. It isn’t only a question of research on what various styles are and how to present material that resonates with these various styles. There is also the question of priorities and dollars and personnel. 

But meanwhile, here’s the good news. You don’t have to wait for another 50 years of research and a reshuffling of priorities so folx spend more money on education and less on, let’s say, cosmetics and professional sports. As I say, don’t hold your breath.

But let’s get back to the good news. The good news is that you can discover for yourself how to maximize your own learning as well as what your particular talents are. 

One cautionary note: Don’t be a jerk about it. If you’re in a group dealing with grief, don’t say, “Well, I learn best if a subject is reduced to a few hundred polynomial formulae. So, let’s start right there. Let’s reduce grief to three dimensions. Later, of course, we can do a proper multidimensional scaling exercise to determine the optimal number of dimensions.”

Photo by Andrea Piacquadio on Pexels.com

No. Don’t say that. Of course, you’re free to suggest that approach, but chances are, in this situation, and in most realistic group situations, you will be treated to information in the same manner as many others who have different styles from yours.

However, in many situations, you are, far and away, the most important stakeholder. You can use your knowledge of how things work for you in order to strategize and plan how you will learn about things. You can organize and arrange your work so you’ll be more productive. 

Here’s a trivial example. I have learned that my eyes have a wisdom of their own. If, for instance, I’m going out for a walk around the garden to take some pictures of the sunset on the flowers, I grab my stuff and find myself turning and staring at the hat-rack on the way out the door. When I was younger, I would ignore this. But what I have learned is that my eyes are really good at knowing what to look at. So, even if I’m in a hurry, I take a moment to reflect on why my eyes are looking there. And, then, it comes to me. I’ll do better if I wear a brimmed hat to keep the sun out of my eyes while I look at my iPhone.

By paying attention to this little quirk, I’ve saved myself a lot of grief over the years; e.g., not leaving the house without my wallet.

Here’s another example. I’m very good at seeing “patterns” emerge from a small number of examples or when there is considerable noise involved. This serves me well as my hearing diminishes, because I can use top-down processing. Generally, but not always, I understand what people are saying. If I try to listen to a foreign language tape that consists only of isolated spoken words, however, I have no hope of knowing what they are saying. “Key,” “Tee,” and “Pea” sound exactly the same.

Seeing patterns easily is generally a nice capacity. However, I’m horrible at finding my own typos immediately after I write something. I actually “see” what I meant to type. A week later, I’m pretty good at catching the errors. If I had more patience, I would wait a week to proofread for every blog post, but being patient isn’t a strength either. I do go back over old posts occasionally and fix the typos (which I never saw at the time). 

Photo by Frank Cone on Pexels.com

When I went to the movies (remember when we used to go to movies?), if it was a comedy, I was very likely to laugh too soon. I “hear” the punchline two lines before it actually occurs. There’s no benefit to my laughing early! But that’s when the punchline hits me. I do keep it soft so as not to disturb the others in the audience. On the other hand, I’m pretty good at “discovering” the playing patterns of my tennis opponents and anticipating what they are going to do. Naturally, I don’t always guess right, but I do way better than chance.

I bring up these examples to illustrate a generality: most of these individual differences have an upside and a downside. Mainly, learning about my own styles and capacities is something I did well after leaving high school. That makes sense. In school, or at least the schools I went to, everybody got the same instruction in the same way almost all the time. But as an adult, you often have a lot of control over your own timing, flow of information, etc. I think it’s worth your while to look back at your experience and discover what you have difficulty with, what you’re OK at, and what you are exceptionally good at. When you have a choice, use the approach you’re really good at. 

—————————————-

More background on “knowing yourself” 

https://en.wikipedia.org/wiki/Know_thyself

History of Know Thyself