
Lately, I have been seeing a fair number of questions on Quora (www.quora.com) that basically ask whether we humans wouldn’t be “better off” if AI systems did “take over the world.” After all, it is argued, an AI system could be smarter than humans. It is an interesting premise and one worthy of consideration. After all, it is clear that human beings have polluted our planet, have been involved in many wars, have often made a mess of things, and right now, we are a mere hair’s breadth away from electing a US President who could start an atomic war for no more profound reason than that someone disagreed with him or questioned the size of his hands.

Personally, I don’t think that having AI systems “replace” human beings or “rule them” would be a good thing. There are three main reasons for this. First, I don’t think that the reason human beings are in a mess is because they are not intelligent enough. Second, if AI systems did “replace” human beings, even if such systems were not only more intelligent but also avoided the real reasons for the mess we’re in (greed and hubris, by my lights), they could easily have other flaws of equal magnitude. The third reason is simply that human life is an end in itself, and not a means to an end.  Let us examine these in turn.

First, there are many species of plants and animals on earth that are, by any reasonable definition, much less intelligent than humans and yet have not over-polluted the planet nor put us on the brink of atomic war. There are at least a few other species, such as dolphins, that are about as intelligent as we are but have had nothing like the world-wide negative ecological impact that we have. No, although we often run into individual people who act against our (and their own) interest, and it seems as though we (and they) would be better off if they were more intelligent, I don’t think lack of intelligence (or even education) is the root of the problem with people.

Here are some simple, everyday examples. I went to the grocery store yesterday. When I checked out, someone else packed my groceries. Badly. Indeed, almost every time I go to the store, they pack the groceries badly (if I can’t pack them myself). What do I mean by badly? One full bag had ripe tomatoes at the bottom. Another paper bag was filled with cans of cat food. It was too heavy for the handles. Another bag was packed lightly, but so full that the handles would break if you held the bag naturally. It might be tempting to think that this bagger was not very intelligent. I believe that the causes of bad packing are different. First, packers typically (but not universally) pay very little attention to what they are actually doing. They are clearly thinking about something other than what they are doing. Indeed, this describes a lot of human activity, at least in the modern USA. Second, packers are in a badly designed system. Once my cart is loaded up, another customer is already having their food scanned on the conveyor belt and the packer is already busy. There is no time to give feedback to the packer on the job they have done. Nor is the situation really very socially appropriate. No matter how gently done, a critique of their performance in front of their colleagues and possibly their manager will be interpreted as an evaluation rather than an opportunity for learning. Even if I did give them feedback, they may or may not believe it. It would be better if the packer could follow me home and observe for themselves what a mess they have made of the packing job. I think if they did that a few times, they’d be plenty smart enough to figure out how to pack better.

Unfortunately, packing is not the only example of this type of system. Another common example is software development. Programmers are typically quite intelligent. But they often build their software and never get a chance to see it in action. Many organizations do not carry out user studies “in the wild” to see how products and services are actually used. It isn’t that the software builders are not smart. But it is problematic that they do not get any real feedback on their decisions. Again, as in the case of the packers, the programmers exist in an organizational structure that far too often makes honest feedback about their errors seem like an evaluation of them, rather than an occasion for learning.

A third example is hotel personnel. A hotel is basically a service business. The cost of the room is a small part of the price. A hotel exists because it serves its customers. Despite this, the people behind the desks seldom have incentives and mechanisms to hear, understand, and fix problems that their customers encounter. A quintessential example came in Boston when my wife and I were there for a planning meeting for a conference she would be chairing in a few months. When we checked out, the clerk asked whether everything was all right. We replied that the room was too hot but we couldn’t seem to get the air conditioning to work. The clerk said, “Oh, yes! Everyone has that problem. You need to turn on the heater for the A/C to work.” This was a bad temperature control design for starters, but the clerk’s response clearly indicated that they were aware of the problem but had no power (and/or incentive) to fix it.

These are not isolated examples. I am sure that you, the reader, have a dozen more. People are smart enough to see and solve the problems, but that is not their job. Furthermore, they will basically get “shot down” or at best ignored if they try to fix the problem. So, I really don’t think the issue is that people are not individually “smart enough” to fix many of the problems we have. It is that we design systems that make us collectively not very smart. (Of course, in outrageous cases, even some individual humans are so prideful that they cannot learn from honest feedback from others.)

Now, you could say that such systems are themselves proof that we are not smart enough. However, that is not a very good explanation. There are existence proofs of smarter organizations. The sad part is that they are exceptions rather than the rule. In my experience, what keeps people from adopting better organizational designs (e.g., ones in which people are empowered to understand and fix problems) is hubris and greed, not a lack of intelligence.

Firstly, in many situations, people believe that they already know everything they need in order to do their job. They certainly don’t want public feedback indicating that they are making mistakes (i.e., could improve) and this attitude spreads to their processing of private feedback. You can easily imagine a computer programmer saying, “I’ve been writing code for User Interfaces for thirty years! Now, you’re telling me I don’t know how?” Why can we imagine that so easily? Because the organizations that most of us live in are not organizations where learning to improve is stressed.

In many organizations, the rules, processes, and management structure make very little sense if the main goal is to make the organization as effective as possible. Instead, however, they make perfect sense if the main goal of the organization is to ensure that the people who have the most power and make the most money keep having the most power and making the most money. In order to do that on an ongoing basis, it is true that the organization must be minimally competent. If it is a grocery store, it must sell groceries at some profit. If it is a software company, it needs to produce some software. If it is a hotel, it can’t simply poison all its potential guests. But to stay in business, none of these organizations must do a stellar and ever-improving job.

So, from my perspective, the reason that most organizations are not better learning organizations is not that we humans are not intelligent enough. The reason for marginally effective organizations is that the actual goal is mainly to keep the people at the top in power. Greed is the biggest problem with people, not lack of intelligence. History shows us that such greed is ultimately self-defeating. Power corrupts all right, and eventually power erodes itself or explodes in revolution. But greedy people continue to believe that they can outsmart history. Dictators believe that they will not suffer the same fate as Hitler or Mussolini. CEOs believe their bad deeds will go unpunished (indeed, often that’s true). So-called leaders often reject criticism by others and eventually spin out of control. That’s hubris.

I see no reason whatever to believe that AI systems, however intelligent, would be anything more than reflections of the greed and hubris of those who build them. It is theoretically possible to design AI systems without hubris and greed, but it is also quite possible to develop human beings for whom hubris and greed are not predominant factors in motivation. We all know people who are eager to learn throughout life; who listen to others; who work collaboratively to solve problems; who give generously of their time and money and possessions. In fact, humans are generally very social animals, and it is quite natural for us to worry more about our group, our tribe, our country, our family than our own little ego. How much hubris and greed end up in an AI system will very much depend on the nature and culture of the organization that builds it.

Next, let us consider what other flaws AI systems could have.
