The name of B. F. Skinner is not often invoked in discussions of User Experience. There are limitations to his basic theory, but perhaps it is time to revisit the baby that was, by many, thrown out with that bathwater. Skinner’s approach has two major limitations that come to mind. First, human behavior is moderated by internal structures such as beliefs, framings, assumptions, labels, and so on. If one ignores these cognitive structures and tries to predict human behavior solely on the basis of what behavior is “reinforced,” those prediction efforts will often fail.
Second, humans (and other animals) are not born as “blank slates.” We have a number of inborn predispositions. For example, if you “punish” a rat by shocking it after it performs a particular action, it will quickly learn to avoid that action. Similarly, if you feed the rat a food with a distinctive taste and that food makes it nauseous, it will quickly learn to avoid that taste. Some readers may have experienced this themselves. If your first experience with oysters, say, made you vomit, you may never try them for the rest of your life.
On the other hand, if you shock the rat after it tastes something, that is a much harder association to master. Or, if you make the rat nauseous after it presses a lever, that is also a difficult association to learn.
Despite these and other limitations of the strictly Skinnerian approach, reinforcement still works in many situations. It is even possible for people to “learn” a behavior because of reinforcement, and to do so without any conscious awareness that they are being trained in this way.
When I was an undergraduate at Case Western Reserve, one of my psychology courses (on learning) was taught by a “Skinnerian.” He was an excellent instructor and I enjoyed the course immensely. Since we had a syllabus, I knew exactly when he would be lecturing on the topic of “unconscious conditioning.” As usual, almost the entire class was seated before the Professor arrived. This gave me time to explain to my classmates my idea for what the class would do: condition him to stand in a corner and comb his hair with his hand.
I think you will appreciate the fun here. He was giving a lecture about how people could be conditioned without awareness and while he was doing that, we were going to condition him without his awareness.
I will return momentarily to explain just how we managed, but first, I must explain the concept of “shaping.” Shaping is an extremely important concept for training your pets or your kids to behave in convenient or useful ways. Basically, the idea is to begin by reinforcing any behavior that is in the direction you want the behavior to go. If you want to potty train your child, for instance, you don’t initially wait for complete success. You praise the child even if he or she merely sits on the potty. You praise the child if they try to make it to the bathroom but have an “accident” on the way. Gradually, your criteria become stricter and you only reinforce behavior closer and closer to the goal. Similarly, if you want to train your dog to “shake hands,” you initially praise them for anything even close. For instance, if you put your hand out and they lift their paw even slightly off the ground, you praise them. As they become more adept, you change your criteria for reward. Eventually, you only reinforce them when they are “shaking hands” as well as you think possible.
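As a toy illustration (not any real training protocol), the ratchet logic of shaping can be sketched in a few lines of Python. The specific numbers here — the noise level, the 0.5 learning rate, the 0.95 tightening factor — are arbitrary assumptions; the point is only that the criterion for reinforcement tightens after each success, so it never outruns the learner:

```python
import random

def shape(target=100.0, start=0.0, noise=5.0, steps=4000, seed=1):
    """Toy shaping loop: reinforce any response that meets the current
    criterion, then tighten the criterion after each success."""
    rng = random.Random(seed)
    behavior = start
    criterion = abs(target - start)  # at first, almost anything counts
    for _ in range(steps):
        response = behavior + rng.gauss(0, noise)    # behavior varies trial to trial
        if abs(response - target) < criterion:       # met the current bar: reinforce
            behavior += 0.5 * (response - behavior)  # reinforced responses recur
            criterion = max(1.0, 0.95 * abs(target - behavior))  # raise the bar
    return behavior
```

Starting from 0, the learner’s typical response ends up near the target of 100, even though a response anywhere near 100 essentially never occurs at the outset — exactly why waiting for the finished behavior before reinforcing would fail.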
Now, let’s return to the Skinnerian Professor and his lecture on unconscious conditioning. If the class had waited until he stood in the corner and combed his hair with his hand, we never would have succeeded. At first, we only reinforced him (by looking up and looking very eager and interested) when he stood to the right of his lectern (from our perspective). Then we reinforced him only when he was near the corner. Then only when he raised his hand slightly. Then we only acted eager and interested when his hand went up toward his head or face. Finally, we only acted eager and interested when he stood in the corner and combed his hair with his hand. It took almost the entire hour for this to work. I kept a written record of his behavior and noted the times and the changes in our criteria for success. When the lecture was over, I walked up and explained what we had done. I showed him my records.
He was astounded! It was awesome. I suppose, theoretically, he could have been putting it on, but if he was, it was the best acting performance I saw in all my years attending Eldred Theater or the Cleveland Playhouse. Once he recovered from his initial shock though, to his credit, he didn’t get angry with me or deny the reality of what had actually happened. He just nodded and said (essentially), “Yeah. This stuff really works.”
Yeah. It really does. And, although B. F. Skinner’s approach to human behavior is overly simplistic, it still works in many circumstances, including a human’s behavior when interacting with a computer system.
Another important contribution of B. F. Skinner is his work on “schedules of reinforcement.” It’s worth understanding this in some detail, but for now, I just want to focus on one aspect. The “schedule of reinforcement” that leads to the highest and most persistent rates of behavior is not to reinforce every time the desired behavior occurs, but to reinforce very seldom and at random: what Skinner called a “variable ratio” schedule.
Las Vegas comes to mind. I only visited once. For a time, I watched people at the slot machines. Unlike the winners in casino TV commercials, in the “real world” (to the extent Las Vegas is part of the “real world”), I saw four people win, and not a single one of them showed even a glimmer of pleasure. They simply started feeding their winnings back into the machine. This is how the Casinos make their money. Basically, they are using a “thin Variable Ratio Schedule” to get people hooked on behavior that is statistically guaranteed not to be in their financial interest. (Of course, it’s understood, people also gamble for fun. That’s okay.) But if you are gambling against the Casinos in order to make money? Well… good luck. Remember: they have chauffeurs, yachts, and mansions. You do the math. They say, “What happens in Vegas, stays in Vegas.” But they might instead say, “Money that comes to Vegas, stays in Vegas.”
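As a back-of-the-envelope sketch (the 2% win probability and 40x payout are made-up numbers, not real casino odds), here is what a “thin” variable ratio schedule with a house edge looks like:

```python
import random

def play_slots(pulls=100_000, bet=1.0, win_prob=0.02, payout=40.0, seed=0):
    """Toy slot machine on a thin variable-ratio schedule: reinforcement
    (a win) is rare and unpredictable, and the payout is set below fair
    odds (fair would be 1 / 0.02 = 50x the bet), so the house always
    wins in the long run."""
    rng = random.Random(seed)
    bankroll = 0.0
    for _ in range(pulls):
        bankroll -= bet                  # every pull costs the bet
        if rng.random() < win_prob:      # the rare, random reinforcement
            bankroll += payout
    return bankroll

# Expected value per pull: 0.02 * 40 - 1 = -0.20,
# so roughly -20,000 over 100,000 pulls.
```

The behavior (pulling the lever) is maintained at a high rate by the occasional win, even though every single pull has a negative expected value.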
In most User Experience contexts, we want the relationship between what the user does and the consequences to be consistent. I say “most” because, when it comes to games or learning experiences or computer art, this is not always true. In most work-related applications, however, we want the user to be reinforced every time he or she takes appropriate action. I think that’s what designers mainly strive for.
I believe, however, that the users themselves sometimes fall into doing something that only occasionally pays off. For example, let’s say a college student has a ground-floor dorm room and connects to the internet via satellite. The connection is flaky. Sometimes it works. Sometimes it doesn’t. But it works much better on the top floor. So, when he or she tries to access the Internet and fails, they walk up the three flights of stairs and successfully connect every single time. But walking three flights of stairs is a pain, so the student in question decides it might help to close all their windows before trying to connect. Say that didn’t work, so they close all the windows and then reboot. Well! What do you know!? It worked. So, the next time they have trouble connecting, instead of walking up three flights of stairs, the student closes all the windows and reboots the machine. Maybe, by sheer chance, it works again!
The next time, perhaps it doesn’t work. But three flights of stairs? That’s a long ways. So, they try again! It works!
But winter is coming.
And with winter comes rain. And with rain comes worse connectivity to the satellite. So, as the rain goes from San Diego frequency to San Francisco to Portland to Seattle frequency, our student becomes less and less successful at actually connecting. But every so often, the procedure works. They may have rebooted their machine ten times before it finally connected, but that just makes it more likely they will be willing to try twenty times before giving up. Furthermore, the student may have “given up” on even considering walking up three flights of stairs. The persistent habit of rebooting the machine multiple times actually prevents the student from either doing the thing that has always worked or working toward a more permanent solution (e.g., changing rooms, getting an antenna, finding a wired hookup, etc.).
It is possible in this way to train a rat or a pigeon or a college student — or even a professor — to do something they would not have consciously chosen to do — or to do something at a much greater frequency than they would have chosen to do it.
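The “ten reboots makes twenty reboots likely” dynamic can be caricatured in code. The rule below — persist for twice the longest dry spell that eventually paid off — is purely an illustrative assumption of mine, not an established learning model, but it captures why behavior trained on thin, random reinforcement is so resistant to extinction:

```python
import random

def failure_tolerance(history):
    """How many consecutive failures this learner will endure before
    quitting: twice the longest run of failures that was eventually
    rewarded during training (an illustrative rule, not Skinner's)."""
    longest = run = 0
    for rewarded in history:
        if rewarded:
            longest = max(longest, run)  # this dry spell paid off in the end
            run = 0
        else:
            run += 1
    return max(1, 2 * longest)

def train(p_success, trials=200, seed=3):
    """Simulate a training history in which each attempt succeeds
    with probability p_success."""
    rng = random.Random(seed)
    return [rng.random() < p_success for _ in range(trials)]

reliable = failure_tolerance(train(1.0))  # reboot always worked: quits fast
flaky = failure_tolerance(train(0.1))     # worked ~1 time in 10: persists far longer
```

The learner whose reboots always worked gives up after a single failure; the learner trained on a flaky connection has sat through long dry spells that eventually paid off, and so keeps rebooting long after the behavior has stopped working at all.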
Next (Perhaps): “The Skinner Box and the Box Next Door.”