AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, Turing, user experience
The Pros and Cons of AI Part Three: Artificial Intelligence
We have already shown in the two previous blogs why it is more effective and efficient to replace eating with Artificial Ingestion and to replace sex with Artificial Insemination. In this, the third and final part, we will discuss why human intelligence should be replaced with Artificial Intelligence. The arguments, as we shall see, are mainly simple extrapolations from replacing eating and sex with their more effective and efficient counterparts.
Human “intelligence” is unpredictable. In fact, all forms of human behavior are unpredictable in detail. It is true that we can often predict statistically what people will do in general. But even those predictions often fail. It is hard to predict whether and when the stock market will go up or down, or which movies will be blockbuster hits. By contrast, computers, as we all know, never fail. They are completely reliable and never make mistakes. The only exceptions to this general rule are those rare cases where hardware fails, software fails, or the computer system was not actually designed to solve the problems that people actually had. Putting aside these extremely rare cases, other errors are caused by people. People may cause errors because they failed to read the manual (which doesn’t actually exist because, to save costs, vendors now expect users to look up the answers to their problems on the web) or because they were confused by the interface. In addition, some “errors” occur because hackers intentionally make computer systems operate in a way that they were not intended to operate. Again, this means human error was the culprit. In fact, one can argue that hardware errors and software errors were also caused by errors in production or design. If these errors see the light of day, then there were also testing errors. And if the project ends up solving problems that are different from the real problems, then that too is a human mistake in leadership and management. Thus, as we can see, replacing unpredictable human intelligence with predictable artificial intelligence is the way to go.
Human intelligence is slow. Let’s face it. To take a representative activity of intelligence, it takes people seconds to minutes to compute the square root of a 16-digit number, while computers can do this much more quickly. It takes even a good artist at least seconds, and probably minutes, to draw a good representation of a birch tree. But Google can pull up an excellent image in less than a second. Some of these will not actually be pictures of birch trees, but many of them will.
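To put the square-root claim in perspective, here is a minimal Python sketch (the 16-digit number is an arbitrary example chosen for illustration): the machine returns the answer essentially instantly, no seconds-to-minutes of human pondering required.

```python
import math
import time

# An arbitrary 16-digit number, as in the example above.
n = 1234567890123456

start = time.perf_counter()
root = math.isqrt(n)  # exact integer square root
elapsed = time.perf_counter() - start

# root is the largest integer whose square does not exceed n.
print(f"isqrt({n}) = {root}, computed in {elapsed:.6f} seconds")
```

On any ordinary laptop the elapsed time prints as a tiny fraction of a second, which is rather the point.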
Human intelligence is biased. Because of their background, training, and experience, people end up with various biases that influence their thinking. This never happens with computers, unless they have been programmed to do something useful, in which case some values will have to be either programmed in or learned through background, training, and experience.
Human intelligence, in its application, most generally has a conscious and experiential component. When a human being is using their intelligence, they are aware of themselves, the situation, the problem, and the process, at least to some extent. So, for example, the human chess player is not simply playing chess; they are quite possibly enjoying it as well. Similarly, human writers enjoy writing; human actors enjoy acting; human directors enjoy directing; human moviegoers enjoy the experience of thinking about what is going on in the movie and feeling, to a large degree, what the people on the screen are attempting to portray. This entire process is largely inefficient and ineffective. If humans insist on feeling things, that could all be accomplished much more quickly with electrodes.
Perhaps worst of all, human intelligence is often flawed by trying to be helpful. This is becoming less and less true, particularly in large cities and large bureaucracies. But here and there, even in these situations that should be models of blind rule-following, you occasionally find people who are genuinely helpful. The situation is even worse in small towns and farming communities where people are routinely helpful, at least to the locals. It is only when a user finds themselves interacting with a personal assistant or audio menu system with no possibility of a pass-through to a human being that they can rest assured that they will not be distracted by someone actually trying to understand and help solve their problem.
Of course, people in many professions, whether they are drivers, engineers, scientists, advertising teams, lawyers, farmers, police officers, etc., will claim that they “enjoy” their jobs, or at least certain aspects of them. But what difference does that make? If a robot or AI system can do 85 to 90% of the job in a fast, cheap way, why pay for a human being to do the service? Now, some would argue that a few people will be left to do the 10-15% of cases not foreseen ahead of time in enough detail to program (or not seen in the training data). But why? What is typically done, even now, is just to let the user suffer when those cases come up. It’s too cumbersome to bother with back-up systems to deal with the other cases. So long as the metrics for success are properly designed, these issues will never see the light of day. The trick is to make absolutely sure that the user has no alternative means of recourse to bring up the fact that their transaction failed. Generally, as the recent case with Yahoo shows, even if the CEO becomes aware of a huge issue, there is no need to bring it to public attention.
All things considered, it seems that “Artificial Intelligence” has a huge advantage over “Natural Intelligence.” AI can simply be defined to be 100% successful. It can save money, and that money can be appropriately partitioned to top company management, shareholders, workers, and consumers. A good general formula to use in such cases is the 90-10 rule; that is, 90% of the increased profits should go to the top management and 10% should go to the shareholders.
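For the arithmetically inclined, the 90-10 rule above can be sketched in a few lines of Python (the $1,000,000 figure is purely hypothetical):

```python
# The "90-10 rule": partition hypothetical increased profits
# among top management, shareholders, workers, and consumers.
increased_profits = 1_000_000  # hypothetical dollar amount

top_management = 0.90 * increased_profits  # 90% to top management
shareholders = 0.10 * increased_profits    # 10% to shareholders
workers_and_consumers = increased_profits - top_management - shareholders

print(f"Top management:        ${top_management:,.2f}")
print(f"Shareholders:          ${shareholders:,.2f}")
print(f"Workers and consumers: ${workers_and_consumers:,.2f}")
```

The reader will note that the remainder allotted to workers and consumers comes out to exactly zero, which is, of course, the elegance of the formula.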
Against these increased profits, one could argue that people get enjoyment out of the thinking that they do. There is some truth to that, but so what? If people enjoy playing doctor, lawyer, and truck driver, they can still do that, but at their own expense. Why should people pay for them to do that when an AI system can do 85% of the job at nearly zero cost? Instead of worrying about that, we should turn our attention to a more profound problem: what will top management do with that extra income?