petersironwood

~ Finding, formulating and solving life's frustrations.

Monthly Archives: June 2016

Deconstructing the job-based economy. 

29 Wednesday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, automation, cognitive computing, ethics, jobs, sports, the singularity, Turing


Recently, various economists, business leaders, and twitterists have opined about the net result of artificial intelligence and robotics on jobs. Of course, no-one can really predict the future. (And that will remain true, even should a “hyper-intelligent AI system” evolve.) The discussion does raise interesting points about the nature of work and what a society might be like if only a small fraction of people are “required” to work in order to meet the economic needs of the population.

As one tries to be precise, it becomes necessary to be a little clearer about what is meant by “work,” “the economic needs,” and “the population.” For example, at one extreme, one can imagine a society that requires nearly everyone to work, but only between the ages of 30 and 50 and only for a few hours a week. This would allow the “work” to be spread widely through the population. Or, one could imagine “work” in which everyone, not just a few researchers and academics, would be encouraged to spend at least 50% of their time continuing to monitor and improve their performance, take courses, do actual research, take the time to communicate with users, etc. Alternatively, one could imagine a society in which only 1/10 to 1/3 of the people worked while others did not work at all. In still another version, rather than have long-term jobs, people would post needs for very small, self-contained tasks, and others would choose the ones they want in return for credits which could be used for various luxuries.

When we speak of economic “needs,” we might do well to distinguish between “needs” and “wants,” although these are not absolutely well-defined categories. We need nutrition and have no need for refined sugar, but to most people it tastes good, so we may well “want” it. We can imagine that, at one extreme, the economy produces enough of some bland substance like “Soylent Green” to meet everyone’s nutritional needs, but no-one ever gets a gourmet meal (or even a burger with fries). It gets rather fuzzier when we discuss “contingent needs.” No-one “needs” a computer, after all, in order to live. However, if you “must” do a job, you may well “need” a computer to do that job. If you want to live a full life, you may “want” to take pictures and store them on your computer. If, on the other hand, you want to spy on everyone and be able to charge exorbitant prices in the future, then you “need” to convince everyone to store their photos in the “cloud.” Then, once everyone has all their photos in the cloud, you can arbitrarily do whatever you want to mess them over. You don’t really “need” to drive folks crazy, but it might be one way to get rich.

How much “work” is required depends not only on how fully we satisfy wants as well as needs, but also on the population that is supported. For many millennia, the population of the earth was sustained by hunting and gathering and stayed small and stable. We cannot support 7 billion people in that manner. Seven billion require some type of agriculture, although it might be the case that it can be done more locally and not require agro-business. In any case, all the combinations of population, how broadly human wants and needs are to be satisfied, and how work is distributed across the population will make huge differences in the social, economic, and political implications of “The Singularity.” Even if an actual “Singularity” is never reached, tsunamis of change are in store due to robotics, artificial intelligence, and the Internet of Things.

Work is not only about providing economic value in return for other economic values. Work provides people with many of their social connections. Friends are often met through work, as are spouses. Even the acquaintances at work who never become friends provide a social facilitation function. If there is no work, people can find other ways to engage socially with others; e.g., walking in parks, playing on sports teams, constructing collaborative works of art, making music, etc. It is likely that people need (not just want) not only some feeling of social connection, but of social contribution. We are probably “wired” to want to help others, provide value, give others pleasure, and so on. If work with pay is not necessary for most people, some other ways must be devised so that each person feels that they are “important” in the sense of providing the others in their “tribe” some value.

Work provides people “focus” as well as identity. If work is not economically necessary, it will be necessary that other mechanisms are available that also provide focus and identity. Currently, in areas where jobs are few and far between, people may find focus and identity in “gangs.” Hopefully, if millions of people lose jobs to automation, artificial intelligence, and robotics, we will collectively find better alternatives for providing a sense of belonging, focus, and identity than lawless gangs.

Some of the many “jobs” performed by AI systems in Turing’s Nightmares include: musical composer, judge, athlete, lawyer, driver, family therapist, doctor, executioner, disaster recovery, disaster planning, peacemaker, personal assistant, winemaker, security guard, and self-proclaimed god. Do you think there are jobs that can never be performed by AI systems?

—————————————

Readers may enjoy my book about possible implications of “The Singularity.”

http://tinyurl.com/hz6dg2d

The following book explores (among other topics) how amateur sports may provide many of the same benefits as work.

http://tinyurl.com/ng2heq3

You can also follow me on twitter JCThomas@truthtableJCT

Doing One’s Level Best at Level Measures

11 Saturday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 4 Comments

Tags

AI, Artificial Intelligence, cognitive computing, customer service, ethics, the singularity, Turing, user experience


(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles can be found in the Business Insider article linked below.

I find several interesting issues in the article. In this post, we explore the first; viz., that the idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such hand-printed characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and, equally important, other living things are capable of an even wider variety of actions. Is building a beehive a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing, and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt, or The Beatles with Mozart or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well prevent people from using resources to solve real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well-suited for and that people are bad at (or don’t particularly enjoy), and to build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system, with machine and human components, that is far superior to either separately, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems, and helps us learn more about the structure of task domains and more about how humans do things. I am not at all against seeing how a computer can win at Jeopardy or play superior Go or invent new recipes or play ping pong. We can learn on all three of the fronts listed above in any of these domains. However, in none of these cases is the likely outcome that computers will “replace” human beings; e.g., at playing Jeopardy, playing Go, creating recipes, or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity, and social relationships. To take all of these away at the same time a huge economic burden is imposed on someone seems heartless. In many cases, the “automation” cannot really do the complete job. What automation does accomplish is to do part of the job. Often the “customer” or “user” must themselves do the rest of the job. Most readers will have experienced dialing a “customer service number” which actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is a large enough one, you may be able to find the “secret code” to get through to a human operator, in which case you will be put into a queue approximately the length of the Nile.

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement, “Please stay on the line. Your call is important to us,” as well as the ever-popular, “Did you know that you can solve all your problems by going online and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout”? This message is particularly popular with companies who provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service but automated part of the service, causing you further hassles and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the mathematical calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention rates is tricky, so that part is typically just not done. The cost savings from firing 10 human operators, including overhead, might be calculated at $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no-one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So, it seems as though it would only take four years to reach a break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four the IT staff payroll may well have grown to $4,000,000 per annum.
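The back-of-the-envelope arithmetic above can be sketched in a few lines of code. This is purely illustrative: the dollar figures are the ones from the paragraph, and the assumption that hidden IT costs grow by $1,000,000 each year is mine, chosen only so that the year-four IT payroll matches the $4,000,000 figure in the text.

```python
# Hypothetical break-even model for the automation decision described above.
# All figures are the illustrative ones from the text, not real data.

SAVINGS_PER_YEAR = 500_000    # payroll saved by firing 10 operators
AI_SYSTEM_COST = 2_000_000    # up-front cost of the AI system

# The "naive" calculation the company actually does:
naive_break_even_years = AI_SYSTEM_COST / SAVINGS_PER_YEAR  # 4.0 years

def cumulative_net(years, it_cost_growth_per_year=1_000_000):
    """Net position after `years`, once the hidden cost is included:
    an IT payroll assumed (for illustration) to grow linearly each year
    to cope with crashes, updates, and security breaches."""
    it_costs = sum(y * it_cost_growth_per_year for y in range(1, years + 1))
    return years * SAVINGS_PER_YEAR - AI_SYSTEM_COST - it_costs

print(naive_break_even_years)   # the rosy projection: 4.0
print(cumulative_net(4))        # the grim reality: -10,000,000
```

Under these (made-up) growth assumptions, the project never breaks even: by the projected break-even year, the company is $10 million in the hole.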

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, he looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, the dog, by opening its mouth, loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry, and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits so they cut costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need for actually thinking about whether it makes sense. Since it is fashionable to remind customers about their website, they will do it as well. Since it is now fashionable to replace the rest of their human customer service reps with personal assistants, this company will do that as well so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting: by using the word “obstacles,” the article presumes that “of course” society should create human-like robots, and the questions of importance are simply what the obstacles are and how we overcome them. The question of whether or not creating human-like robots is desirable is thereby finessed.

—————————-

Follow me on Twitter: @truthtableJCT

Turing’s Nightmares

See the following article for a treatment about fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

The Winning Weekend Warrior discusses strategy and tactics for all sports — including business. Readers might also enjoy my sports blog

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3

Sweet Seventeen in Turing’s Nightmares

02 Thursday Jun 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, cybersex, emotional intelligence, ethics, the singularity, user experience


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even after you apologize for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real-life case in which a college basketball player successfully sued to get royalties for his image in an EA sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA)

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if the protagonist in any of these scenarios were itself an AI system? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology or whether there are reasons why any advanced intelligence, carbon- or silicon-based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction, or even chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/her/him. Because there would be no reason whatsoever to doubt an AI system, right?
