
(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles can be found in the Business Insider article linked below.

I find several interesting issues in the article. In this post, we explore the first: the idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such hand-printed characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and, equally important, other living things are capable of an even wider variety of actions. Is building a beehive a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing, and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt, or The Beatles with Mozart or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well divert resources from solving real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well suited for and that people are bad at (or don’t particularly enjoy), and to build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system, with machine and human components, that is far superior to either alone, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems, and it helps us learn more about the structure of task domains and about how humans do things. I am not at all against seeing how a computer can win at Jeopardy or play superior Go or invent new recipes or play ping pong. We can advance on all three of the fronts listed above in any of these domains. However, in none of these cases is the likely outcome that computers will “replace” human beings at playing Jeopardy, playing Go, creating recipes, or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity, and social relationships. To take all of these away at the same time that a huge economic burden is imposed on someone seems heartless.

In many cases, the “automation” cannot really do the complete job. What automation does accomplish is to do part of the job. Often the “customer” or “user” must do the rest of the job themselves. Most readers will have experienced dialing a “customer service number” that actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is a large enough one, you may be able to find the “secret code” to get through to a human operator, in which case you will be put into a queue approximately the length of the Nile.

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement, “Please stay on the line. Your call is important to us,” as well as the ever-popular, “Did you know that you can solve all your problems by going online and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout?” This message is particularly popular with companies that provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service; it has automated part of the service, causing you further hassle and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention is tricky, so that part is typically just not done. The cost savings from firing ten human operators, including overhead, might be calculated at $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So, it seems as though it would only take four years to reach the break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four, the IT staff payroll may well have grown to $4,000,000 per annum.
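The arithmetic above can be sketched as a toy model. All figures are the post’s illustrative numbers; the year-by-year growth of IT costs toward the $4,000,000 figure is an assumption for the sake of the sketch:

```python
# Toy break-even model using the post's illustrative (hypothetical) figures.
AI_SYSTEM_COST = 2_000_000   # up-front cost of buying/developing the AI system
ANNUAL_SAVINGS = 500_000     # payroll saved by firing ten operators (incl. overhead)

# Naive break-even: ignore maintenance entirely.
naive_break_even_years = AI_SYSTEM_COST / ANNUAL_SAVINGS
print(f"Naive break-even: {naive_break_even_years:.0f} years")

# Less naive: unplanned IT costs grow each year. The trajectory below is an
# assumed interpolation ending at the post's "$4,000,000 per annum" by year four.
it_costs = [1_000_000, 2_000_000, 3_000_000, 4_000_000]

net = -AI_SYSTEM_COST
for year, it_cost in enumerate(it_costs, start=1):
    net += ANNUAL_SAVINGS - it_cost
    print(f"Year {year}: cumulative net = ${net:,}")
# Under these assumptions, the project never breaks even; losses deepen yearly.
```

The point of the sketch is only that a spreadsheet-simple “four-year payback” can invert sign entirely once the maintenance column, routinely left blank, is filled in.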

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, he looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, the dog, by opening its mouth, loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry, and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits so they cut costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need to actually think about whether this makes sense. Since it is fashionable to remind customers about their website, they will do that as well. Since it is now fashionable to replace the rest of their human customer service reps with personal assistants, this company will do that too, so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting: by using the word “obstacles,” the article presumes that “of course” society should create human-like robots, and that the only important questions are what the obstacles are and how we can overcome them. The question of whether creating human-like robots is desirable at all is thereby finessed.

—————————-

Follow me on Twitter: @truthtableJCT

Turing’s Nightmares

See the following article for a treatment of fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

Readers might also enjoy my sports blog, The Winning Weekend Warrior, which discusses strategy and tactics for all sports, including business.

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3