My degrees are in psychology, but I have long been fascinated by computers. One main reason I went into HCI/UX/Human Factors was that I saw computers as devices that would amplify collective human intelligence. With a mixture of people and computers, I believed, we would be able to solve such complex problems as world hunger, overpopulation, disease, global climate change, wars, and so on. I definitely saw myself as most interested in the people side, though I thought comparing and contrasting computers and people shed new light on the people side. If you have only one type of computational mechanism (viz., us), then it's hard to know how much of what happens in trying to solve a problem is due to our common human heritage and hardware and how much is intrinsic to the problem.
This interest in the novel light that computing could shine on human intellect was what initially drew me to computers, but I later saw them as fascinating in their own right as well as being extremely important tools for a psychologist. For example, I used a PDP-8 to run experiments on the psychology of aging and to analyze the data. Only when I joined IBM did I begin to change my focus from how computers could be useful tools for psychologists to how psychology could be a useful tool for improving computers (or at least the actual performance of a computer doing useful work when used by a person).
Although I took a number of programming courses, I only ever became an amateur programmer. My main method for programming some task was to think about how I would do it and then, step by step, make the computer do it. This process has many limitations, a few of which are obvious even to me. For example, when doing my dissertation work, I had the computer register the time whenever any one of five subjects made a response. While the subjects were in their booths, I sat in the computer room reading while the disk kept buzzing next to me: Bz-b-bz-bz. Bz-b-bz-bz. Bz-b-bz-bz.
I had used my "What would John do?" method of programming. If I saw a long number and had to go write it down, I would want to do it immediately and then be ready for the next number. But this was insane for the computer! The computer could "remember" hundreds of these numbers and then write them out to the disk en masse. Anyone who had gone through even an introductory programming course would have approached the problem differently than I had — at least until the computer used its disk buzzing to wake me up to its modus operandi, which is really quite different from mine.
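The contrast between my one-number-at-a-time method and the computer's preferred modus operandi can be sketched as follows. This is Python rather than the FORTRAN of the actual program, and the function names, reaction times, and file name are all made up for illustration:

```python
def record_immediately(times, path):
    """The "What would John do?" method: one disk write per response."""
    with open(path, "w") as f:
        for t in times:
            f.write(f"{t}\n")
            f.flush()  # bz-b-bz-bz: every single response hits the disk

def record_buffered(times, path):
    """Let the computer play to its strengths: hold the numbers in
    memory and write them out to disk en masse at the end."""
    with open(path, "w") as f:
        f.write("\n".join(str(t) for t in times) + "\n")  # one big write

# Hypothetical reaction times in milliseconds -- not real data.
reaction_times = [312, 287, 455, 301, 298]
record_buffered(reaction_times, "session.dat")
```

Both versions produce the same file; the difference is how many trips to the disk it takes to get there.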
Like every other human, I make mistakes all the time in every sort of endeavor. For example, I like to play tennis and I like to hit a serve that's hard to return. So, I am typically trying to serve to a particular spot. I'm not dead-on accurate. I might miss long or wide by a couple inches or hit the net. But I have never (or at least not yet) turned around and sailed the ball out of the court behind me. Nor have I ever struck the ball straight down at my feet. Nor have I tossed the ball sideways into the screen, swung anyway (!), accidentally let go, and flung the racquet across the net. But if you have ever programmed a computer, you know that any of these behaviors could result from the slightest error you can imagine.
It is ironic, because most people think people are unreliable while computers are reliable. Well, it's not that simple. Most people are pretty reliable most of the time, especially when they are acting within their bailiwick. Yes, they slip up and make mistakes, but those mistakes are usually (not always) both understandable and fixable. A computer can do anything. The hardware is typically reliable but can still fail. Much more likely is that there are differences between what the programmer thought she or he was telling the computer and what the programmer actually told it to do. But wait! There's more! Even more likely is that the intent of the programmer solves only a small part of the overall problem, solves the wrong problem, or actually makes the situation worse. That is not — or at least not solely — the fault of the programmer (more likely, the fault of an entire bureaucratic process).
This kind of weird and catastrophic error appeared in the program that ran my dissertation experiment at Michigan. Worse, it was a different weird and catastrophic error that appeared every time I ran the program! Often, the program would run correctly for five minutes or fifty minutes and then – BANG – unrecoverable error.
The program was in FORTRAN II. Someone had added some useful macro functions for running experiments. For instance, there were a number of initializations for the displays. We had five displays, so these functions all had the form FUNCTION1(2), which applied FUNCTION1 to the second display. To make it even more convenient, if you wanted to do the same thing to all five displays (which was always the case for me), you could simply pass the argument (7) and the macro code would apply it to all five displays. So, I had a list of about five or six commands of that form: FUNCTION1(7), FUNCTION2(7), FUNCTION3(7), etc. Having initialized the displays, the next thing on my agenda was to initialize the arrays that held the timing information. Since I wanted to do this for all five of the arrays, it seemed as easy as rolling off a cliff to use the (7) convention and thereby apply it to all five reaction-time arrays. In more modern versions of FORTRAN, you can't do that (you get a compile-time error). But back when Joy to the World by Three Dog Night topped the charts, there was no error message at compile time. Secretly, of course, you just know that the compiler was snickering as it thought: "Oh, you want to write a time stamp into the seventh element of a five-element array? Fine. The customer is always right. Be my guest. Good luck with that." This is the computer trying to "serve" and instead smashing the ball directly into the ground.
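For readers who never had the pleasure, the bug can be sketched in a modern language. This is Python, not the original FORTRAN II, and every name below is hypothetical; the point is simply that writing to the "seventh element of a five-element array" now fails loudly at run time instead of silently clobbering whatever happens to sit in memory past the end of the array:

```python
NUM_DISPLAYS = 5

def init_display(display_index):
    """A sketch of the macro convention: the argument 7 meant
    'apply this to all five displays'."""
    if display_index == 7:
        return [init_display(i) for i in range(1, NUM_DISPLAYS + 1)]
    return f"display {display_index} initialized"

# A five-element reaction-time array, as in the dissertation program.
reaction_times = [0] * NUM_DISPLAYS

# The fatal analogy: carrying the "(7)" convention over to a plain array.
# FORTRAN II silently wrote the time stamp past the end of the array;
# Python refuses at run time.
try:
    reaction_times[7 - 1] = 12345  # seventh element of a five-element array
except IndexError:
    print("index out of range")  # the snickering, made audible
```

The array is left untouched and the program keeps running, which is exactly what the FORTRAN II version did not do.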
Yet, keep in mind that there are some (not all) very rich and powerful people out there who sincerely wish that “people” could just be more like computers and do precisely as they’re told, always, and without question. And, when I say there are “people” they want to control like a computer, I mean you. That is exactly what they want. For you to do what they insist you do. They are about to get away with it – and if they do, there will be no Joy to the World – not for a very long time. Because if someone else lays out all the choices for you, you are not living your life at all. You are a tool in their life.
It isn’t even really a good system for them. Willing collaborations yield insights and creativity and productivity. It is precisely what has taken us from buggy whips to fingertips in an astoundingly short time. Society and technology and learning progressed at a snail’s pace in Medieval times. I don’t mean those really speedy thoroughbred racing snails either; I’m referring to the garden variety garden snail. A politician who has competition will want to show some sort of real progress. But a dictator? Maybe if they are particularly partial to scientific advancement or the fine arts, they might throw a few dollars that way. And some have. But many have not. What they typically put time, energy and thought into is war and the weapons of war.
Now, instead of, or at least in addition to, having computers help provide a coordinating infrastructure of knowledge so that human beings can collaborate and solve more interesting problems as I had initially hoped a half century ago, computers and social media are being used to trick people into denying the validity of their own experience and existence. How do we debug this situation before it’s too late? I sometimes think that part of the problem is that we have tried to jam seven elements of serious social and technological change into an array that can only hold five elements. But maybe that’s irrelevant. What is relevant is that people are at their best when they are free to be people and at their worst when they are made to pretend that they are machines.