
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: ethics

Rules and Standards nearly Dead? 

04 Sunday Sep 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, law, speeding, the singularity, Turing


Ever get a speeding ticket that you thought was “silly”? I certainly have. On one occasion, when I was in graduate school in Ann Arbor, I drove by a police car parked in a gas station. It was a 35 mph zone. I looked over at the police car and looked down to check my speed. Thirty-five mph. No problem. Or, so I thought. I drove on and noticed that a few seconds later, the police officer turned his car on to the same road and began following me perhaps 1/4 to 1/2 mile behind me. He quickly zoomed up and turned on his flashing light to pull me over. He claimed he had followed me and I was going 50 mph. I was going 35. I kept checking because I saw the police car in my mirror. Now, it is quite possible that the police car was traveling 50, because he caught up with me very quickly. I explained this to no avail.

The University of Michigan at that time in the late 60’s was pretty liberal but was situated in a fairly conservative, some might say “redneck”, area of Michigan. There were many clashes between students and police. I am pretty certain that the only reason I got a ticket was that I was young and sporting a beard and therefore “must be” a liberal anti-war protester. I got the ticket because of bias.

Many years later, in 1988, I was driving north from New York to Boston on Interstate 84. This particular section of road is three lanes on both sides. It was a nice clear day and the pavement was dry as well as being dead straight with no hills. The shoulders and margins near the shoulders were clear. The speed limit was 55 mph but I was going 70. Given the state of my car, the conditions and the extremely sparse traffic, as well as my own mental and physical state, I felt perfectly safe driving 70. I got a ticket. In this case, I really was breaking the law. Technically. But I still felt it was a bit unjustified. There was no way that even a deer or rabbit, let alone a runaway child could come out of hiding and get to the highway without my seeing them in time to slow down, stop, or avoid them. Years earlier I had been on a similar stretch of road in Eastern Montana and at that time there was no speed limit. Still, rules are rules. At least for now.

“The Death of Rules and Standards” by Anthony J. Casey and Anthony Niblett suggests that advances in artificial intelligence may someday soon replace rules and standards with “micro-directives” tuned to the specifics of time and circumstance, which would provide the benefits of both rules and standards without the costs of either. “…we suggest…a larger trend toward context specific laws that can adapt to any situation.” This is an interesting thesis, and exploring it helps shine some light on what AI likely can and cannot do, as well as making us question why we humans have categories and rules at all. Perhaps AI systems could replace human bias and general laws that seem to impose unnecessary restrictions in particular circumstances.

The first quibble with their argument is that no computer, however powerful, could possibly cover all situations. Taken literally, this would require a complete and accurate theory of physics and of human behavior, as well as knowledge of the position and state of every particle in the universe. Not even post-singularity AI will likely be able to accomplish this. I hedge with the word “likely” because it is theoretically possible that a sufficiently smart AI will uncover some “hidden pattern” showing that our universe, which seems so vast and random, can in fact be predicted in detail by a small set of laws that do not depend on details. In this fantasy future, there is no “true” randomness or chaos or butterfly effect.

Fantasies aside, the first issue that must be dealt with for micro-directives to be reasonable would be to have a good set of “equivalence classes” and/or to partition away differences that do not make a difference. The position of the moons of Jupiter shouldn’t make any difference as to whether a speeding ticket should be given or whether a killing is justified. Spatial proximity alone allows us as humans to greatly diminish the number of factors that need to be considered in deciding whether or not a given action is required, permissible, poor, or illegal. If I had gone to court about the speeding ticket on I-84, I might have mentioned the conditions of the roadway and its surroundings immediately ahead. I would not have mentioned anything whatever about the weather or road conditions anywhere else on the planet as being relevant to the safety of the situation. (Notice, though, that it did seem reasonable to me, and possibly to you, to mention that very similar conditions many years earlier in Montana gave rise to no speed limit at all.) This gives us a hint that what is relevant or not relevant to a given situation is non-trivially determined. In fact, the “energy crisis” of the early 70’s gave rise to the National Maximum Speed Law as part of the 1974 Federal Emergency Highway Energy Conservation Act, which, among other things, capped speed limits nationwide at 55 mph. A New York Times article by Robert A. Hamilton cites a study of compliance on Connecticut Interstates in 1988 showing that 85% of the drivers violated the 55 mph speed limit!

So, not only would I not have received a ticket in Montana in 1972 for driving under similar conditions; I also would not have gotten a ticket on that same exact stretch of highway for going 70 in 1972 or in 1996. And, in the year I actually got that ticket, 85% of the drivers were also breaking the speed limit. The impetus for the 1974 law was that it was supposed to reduce demand for oil; however, advocates were quick to point out that it should also improve safety. Despite several studies on both of these factors, it is still unclear how much, if any, oil was actually saved, and it is also unclear what the impact on safety was. It seems logical that slower speeds should save lives. However, people may go out of their way to get to an Interstate if they can drive much faster on it, so some traffic during the 55 limit would stay on less safe rural roads. In addition, falling asleep while driving is not recommended. Driving a long trip at 70 gets you off the road earlier and perhaps before dusk, while driving at 55 will keep you on the road longer and possibly in the dark. Moreover, lowering the speed limit, to the extent there is any compliance, does not just impact driving; it could also impact productivity. Time spent on the road is (hopefully) not time working for most people. One reason it is difficult to measure empirically the impact of slower speeds on safety is that other things were happening as well. Cars have gained a number of features to make them safer over time, and seat belt usage has gone up as well. They have also become more fuel efficient. Computers, even very “smart” computers, are not “magic.” They cannot completely differentiate cause and effect from naturally occurring data. For that, humans or computers have to do expensive, time-consuming, and ethically problematic field experiments.

Of course, what is true about something as simple as enforcing speed limits is equally or more problematic in other areas where one might be tempted to utilize micro-directives in place of laws. Sticking to speeding laws, micro-directives could “adjust” to conditions and avoid biases based on gender, race, and age, but they could also take into account many more factors. Should the allowable speed, for instance, be based on income? (After all a person making $250K per year is losing more money by driving more slowly than one making $25K/year). How about the reaction time of the driver? How about whether or not they are listening to the radio? As I drive, I don’t like using cruise control. I change my speed continually depending on the amount of traffic, whether or not someone in the nearby area appears to be driving erratically, how much visibility I have, how closely someone is following me and how close I have to be to the car in front and so on. Should all of these be taken into account in deciding whether or not to give a ticket? Is it “fair” for someone with extremely good vision and reaction times to be allowed to drive faster than someone with moderate vision and slow reaction times? How would people react to any such personalized micro-directives?
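To make those questions concrete, here is a purely hypothetical sketch of what a personalized speeding micro-directive might look like. Every factor, threshold, and weight below is invented for illustration; nothing of the sort appears in Casey and Niblett’s paper.

```python
# Hypothetical "micro-directive" for speeding: the allowed speed is computed
# from the driver and the circumstances rather than posted as one general rule.
def allowed_speed_mph(posted_limit, conditions, driver):
    speed = posted_limit
    if conditions["dry_pavement"] and conditions["visibility_miles"] > 1.0:
        speed += 10                             # clear, dry, straight road
    if conditions["traffic_density"] > 0.7:
        speed -= 10                             # crowded road
    if driver["reaction_time_s"] < 0.7:
        speed += 5                              # should fast reflexes earn this?
    if driver["listening_to_radio"]:
        speed -= 2                              # or should this count at all?
    # ...and should income ever enter into it, as asked above?
    return max(25, speed)

# My 1988 trip up I-84, roughly: clear day, dry pavement, almost no traffic.
print(allowed_speed_mph(
    55,
    {"dry_pavement": True, "visibility_miles": 5.0, "traffic_density": 0.1},
    {"reaction_time_s": 0.6, "listening_to_radio": False}))   # -> 70
```

Even this toy version forces exactly the uncomfortable choices raised above: which factors are legitimate, who sets the weights, and how a driver could ever know what “the limit” is at any given moment.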

While the speeding ticket situation is complex and could be fraught with emotion, what about other cases such as abortion? Some people feel that abortion should never be legal under any circumstances and others feel it is always the woman’s choice. Many people, however, feel that it is only justified under certain circumstances. But what are those circumstances in detail? And, even if the AI system takes into account 1000 variables to reach a “wise” decision, how would the rules and decisions be communicated?

Would an AI system be able to communicate in such a way as to personalize the manner of presentation for the specific person in the specific circumstances to warn them that they are about to break a micro-directive? In order to be “fair”, one could argue that the system should be equally able to prevent everyone from breaking a micro-directive. But some people are more unpredictable than others. What if, in order to make person A 98% likely to follow the micro-directive, the AI system presents a soundtrack of a screaming child, but in order to make person B 98% likely to follow it, the system only whispers a warning? Now suppose person B ignores the micro-directive and speeds (which, according to the premise, would happen 2% of the time). Wouldn’t person B now be likely to object that, had they received the same warning as person A, they would not have ignored the micro-directive? Conversely, person A might be so disconcerted by the warning that they end up in an accident.

Anyway, there is certainly no disputing that our current system of using human judgement is prone to various kinds of conscious and unconscious biases. In addition, it seems to be the case that any system of general laws ends up punishing people for what is actually “reasonable” behavior under the circumstances and ends up letting people off scot-free when they do despicable things which are technically legal (absurdly rich people and corporations paying zero taxes come to mind). Will driverless cars be followed by judge-less and jury-less courts?

Turing’s Nightmares

Abracadabra!

07 Sunday Aug 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ 3 Comments

Tags

"Citizens United", AI, Artificial Intelligence, biotech, cognitive computing, emotional intelligence, ethics, the singularity, Turing


Abracadabra! Here’s the thing. There is no magic. Of course, there is the magic of love and the wonder at the universe and so there is metaphorical magic. But there is no physical magic and no mathematical magic. Why do we care? Because in most science fiction scenarios, when super-intelligence happens, whether it is artificial or humanoid, magic happens. Not only can the super-intelligent person or computer think more deeply and broadly, they also can start predicting the future, making objects move with their thoughts alone and so on. Unfortunately, it is not just in science fiction that one finds such impossibilities but also in the pitches of companies about biotech and the future of artificial intelligence. Now, don’t get me wrong. Of course, there are many awesome things in store for humanity in the coming millennia, most of which we cannot even anticipate. But the chances of “free unlimited energy” and a computer that will anticipate and meet our every need are slim indeed.

This all-too-popular exaggeration is not terribly surprising. I am sure much of what I do seems quite magical to our cats. People in possession of advanced or different technology often seem “magical” to those with no familiarity with the technology. But please keep in mind that making a human brain “better”, whether by making it bigger, giving it more connections, or making it faster — none of these alterations will enable the brain to move objects via psychokinesis. Yes, the brain does produce a minuscule amount of electricity, but way too little to move mountains or freight trains. Of course, machines can theoretically be built to wield a lot of physical energy, but it isn’t the information processing part of the system that directly causes something in the physical world. It is through actuators of some type, just as it is with animals. Of course, super-intelligence could make the world more efficient. It is also possible that super-intelligence might discover as yet undiscovered forces of the universe. If it turns out that our understanding of reality is rather fundamentally flawed, then all bets are off. For example, if it turns out that there are twelve fundamental forces in the universe (or, just one), and a super-intelligent system determines how to use them, it might be possible that there is potential energy already stored in matter which can be released by the slightest “twist” in some other dimension or using some as yet undiscovered force. To human beings who have never known about the other eight forces, let alone how to harness them, this might appear to be “magic.”

There is another, more subtle kind of “magic” that might be called mathematical magic. As has been known for a long time, it is theoretically possible to play perfect chess by calculating all possible moves, all possible responses to those moves, and so on, down to the final draws and checkmates. It has also been calculated that such an enumeration of contingencies would not be possible even if the entire universe were a nano-computer operating in parallel since the beginning of time. There are many similar domains. Just because a person or computer is way, way smarter does not mean they will be able to calculate every possibility in a highly complex domain.
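A little back-of-the-envelope arithmetic shows why. The numbers below are the standard rough estimates (about 35 legal moves per position, games of about 80 plies, about 10^80 atoms in the observable universe); the point survives any reasonable substitution.

```python
import math

# Rough, standard estimates; none of these figures is precise.
branching = 35                       # typical legal moves per chess position
plies = 80                           # half-moves in a typical game
game_tree = branching ** plies       # positions to enumerate for "perfect" play

atoms_in_universe = 10 ** 80
age_of_universe_ns = 4.3e17 * 1e9    # ~13.8 billion years, in nanoseconds

ops_possible = math.log10(atoms_in_universe) + math.log10(age_of_universe_ns)
print(f"game tree         ~ 10^{math.log10(game_tree):.0f}")   # ~10^124
print(f"atom-computer ops ~ 10^{ops_possible:.0f}")            # ~10^107
```

Even if every atom in the observable universe had performed one calculation per nanosecond since the beginning of time, the count comes up some seventeen orders of magnitude short.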

Of course, it is also possible that some domains might appear impossibly complex but actually be governed by a few simple, but extremely difficult to discover laws. For instance, it might turn out that one can calculate the precise value of a chess position (encapsulating all possible moves implicitly) through some as yet undiscovered algorithm written perhaps in an as yet undesigned language. It seems doubtful that this would be true of every domain, but it is hard to say a priori. 

There is another aspect of unpredictability, and that has to do with random and chaotic effects. Imagine trying to describe every single molecule of earth’s seas and atmosphere in terms of its motion and position. Even if there were some way to predict state N+1 from state N, we would have to know everything about state N. The effects of the slightest miscalculation or missing piece of data could be amplified over time. So long-term predictions of fundamentally chaotic systems like weather, or what your kids will be up to in 50 years, or what the stock market will be in 2600 are most likely impossible, not because our systems are not intelligent enough but because such systems are by their nature not predictable. In the short term, weather is largely, though not entirely, predictable. The same holds for what your kids will do tomorrow or, within limits, what the stock market will do. The long-term predictions are quite different.
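A tiny standard example makes the amplification concrete. The logistic map below is a textbook chaotic system (a stand-in for illustration, obviously not a weather model); two “states” that differ by one part in a billion agree for a while and then diverge completely.

```python
# Sensitive dependence on initial conditions, illustrated with the logistic map.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.300000000, 0.300000001      # initial states differing by 1e-9
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step in (1, 10, 25, 40, 50):
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

After a few dozen steps, the “prediction” from the slightly wrong starting state tells you essentially nothing about the true one.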

In The Sciences of the Artificial, Herb Simon provides a nice thought experiment about the temperature in various regions of a closed space. I am paraphrasing, but imagine a dormitory with four “quads.” Each quad has four rooms, and each room is partitioned into four areas with screens. The screens are not very good insulators, so if the temperatures in these areas differ, they will quickly converge. In the longer run, the temperature will tend toward the average of the entire quad. In the very long term, if no additional energy is added, the entire dormitory will tend toward the global average. So, when it comes to many kinds of interactions, nearby interactions dominate in the short run, but in the long term, more global forces come into play.
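Here is a minimal simulation in the spirit of Simon’s thought experiment. The structure (four quads, four rooms per quad, four screened areas per room) is his; the particular mixing rates are my own invented stand-ins. Within-room differences vanish almost immediately, while differences between quads linger.

```python
import random

# Toy dormitory: 4 quads, 4 rooms per quad, 4 screened areas per room.
AREAS, ROOMS, QUADS = 4, 4, 4
random.seed(1)
temps = [[[random.uniform(15, 25) for _ in range(AREAS)]
          for _ in range(ROOMS)] for _ in range(QUADS)]

def mix(values, rate):
    """Move every value a fraction `rate` of the way toward their mean."""
    mean = sum(values) / len(values)
    return [v + rate * (mean - v) for v in values]

def spread(values):
    return max(values) - min(values)

for step in range(1, 201):
    for q in range(QUADS):
        # Screens leak quickly: areas within a room converge fast.
        temps[q] = [mix(room, rate=0.5) for room in temps[q]]
        # Room walls leak slowly: nudge each room toward the quad mean.
        room_means = [sum(room) / AREAS for room in temps[q]]
        for r, new_mean in enumerate(mix(room_means, rate=0.05)):
            temps[q][r] = [t + (new_mean - room_means[r]) for t in temps[q][r]]
    # The building as a whole leaks slowest of all.
    quad_means = [sum(sum(room) for room in quad) / (ROOMS * AREAS) for quad in temps]
    for q, new_mean in enumerate(mix(quad_means, rate=0.005)):
        temps[q] = [[t + (new_mean - quad_means[q]) for t in room] for room in temps[q]]

    if step in (1, 10, 50, 200):
        within_room = max(spread(room) for quad in temps for room in quad)
        print(f"step {step:3d}: worst within-room spread {within_room:5.2f}, "
              f"across-quad spread {spread(quad_means):5.2f}")
```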

Now, let us take Simon’s simple example and consider what might happen in the real world. We want to predict what the temperature is in a particular partitioned area in 100 years. In reality, the dormitory is not a closed system. Someone may buy a space heater and continually keep their little area much warmer. Or, maybe that area has a window that faces south. But it gets worse. Much worse. We have no idea whether the dormitory will even exist in 100 years. It depends on fires, earthquakes, and the generosity of alumni. In fact, we don’t even know whether brick and mortar colleges will exist in 100 years. As we try to predict over longer and longer time frames, not only do more physically distant factors come into play; the determining factors also become more conceptually distant. In a 100 year time frame, the entire college may or may not exist, and we don’t even know whether the determining factor(s) will be financial, astronomical, geological, political, social, physical or what. This is not a problem that will be solved via “Artificial Intelligence” or by giving human beings “better brains” via biotech.

Whoa! Hold on there. Once again, it is possible that in some other dimension or using some other as yet undiscovered force, there is a law of conservation so that going “off track” in one direction causes forces to correct the imbalance and get back on track. It seems extremely unlikely, but it is conceivable that our model of how the universe works is missing some fundamental organizing principle and what appears to us as chaotic is actually not.

The scary part, at least to me, is that some descriptions of the wonderful world that awaits us (once our biotech or AI start-up is funded) depend on there being a much simpler, as yet unknown force or set of forces that is discoverable and completely unanticipated. Color me “doubting Thomas” on that one.

It isn’t just that investing in such a venture might be risky in terms of losing money. It is that we humans are subject to a blind pride that makes people presume they can predict what the impact of making a genetic change will be, not just on a particular species in the short term, but on the entire planet in the long run! We can indeed make small changes in both biotech and AI and see improvements in our lives. But when it comes to recreating dinosaurs in a real life Jurassic Park or replacing human psychotherapists with robotic ones, we really cannot predict what the net effect will be. As humans, we are certainly capable of imagining possibilities, containing them, and slowly testing them as we introduce them. Yeah. That could happen. But…

What seems to actually happen is that companies not only want to make more money; they want to make more money now. We have evolved social and legal and political systems that put almost no brakes on runaway greed. The result is that more than one drug has been put on the market that has had a net negative effect on human health. This is partly because long term effects are very hard to ascertain, but the bigger cause is unbridled greed. Corporations, like horses, are powerful things. You can ride farther and faster on a horse. And certainly corporations are powerful agents of change. But the wise rider is master of, or partner with, the horse. They don’t allow themselves to be dragged along the ground by a rope while the horse goes wherever it will. Sadly, that is precisely the position society is in vis-à-vis corporations. We let them determine the laws. We let them buy elections. We let them control virtually every news medium. We no longer use them to get amazing things done. We let them use us to get done what they want done. And what is that thing that they want done? Make hugely more money for a very few people. Despite this, most companies still manage to do a lot of net good in the world. I suspect this is because human beings are still needed for virtually every vital function in the corporation.

What will happen once the people in a corporation are no longer needed? What will happen when people who remain in a corporation are no longer people as we know them, but biologically altered? It is impossible to predict with certainty. But we can assume that it will seem to us very much like magic.

 

 

 

 

Very.

Dark.

Magic.

Abracadabra!

Turing’s Nightmares

Photo by Nikolay Ivanov on Pexels.com

Old Enough to Know Less

19 Tuesday Jul 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, machine learning, prejudice, the singularity, Turing


Old Enough to Know Less?

There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are based on a billion years of evolution and are made of living cells has implications about how we experience the world. However, here we are addressing a much less philosophical and more practical issue. Moving around and interacting facilitates learning.

I first discussed this in an appendix to my dissertation. In it, I compared human behavior in a problem solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from real high level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions that they had already visited earlier in their thinking. My hypothesis was that a Grand Master examined one part of the game tree, then another part, and in so doing updated a slightly altered copy of their general evaluation function; the copy learned from the exploration, so that the evaluation function applied to this particular position became tuned to this particular position.

Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell and their relationships. What we see, for instance, when we walk, is not a random sequence of images (unlike TV commercials!), but a sequence that has very particular and useful properties. As we approach objects, we most typically get more and more detailed images of those objects. This allows a constant tuning process for our being able to recognize things at a distance and with minimal cues.

An analogous case could be made for getting to know people. We make inferences and assumptions about people initially based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives as well as the longer time delays, learning more about people is not guaranteed as it is with visual stimuli. If a person begins interacting with people who are predefined to be in a “bad” category, experience with that person may be looked at through such a heavy filter that people never change their minds despite what an outside observer might perceive as overwhelming evidence. If a man believes all people who wear hats are “stupid” and “prone to violence” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule” or say, “Well, he doesn’t always wear hats” or “The hats he wears are made by non-hat wearers and that makes him seem peaceful and intelligent.” The continued misperceptions, over-generalizations, and prejudices partly continue because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.

Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulations? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how it is initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one top-down directed by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” What this rule claims (taken from the work of Paula Underwood) is that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do. Try to gather evidence about these hypotheses.

If prejudice and bigotry are allowed to flourish as an “acceptable political position,” they can lead to the erosion of peace, prosperity and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many, if not most, individual “Whites” themselves prejudiced; it became illegal even for those unprejudiced whites to sit at the same counters, use the same restrooms, etc. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law with an insane dictator at the helm, millions of lives were lost in “fixing” this. Of course, even having the Allies win World War II did not bring back the six million Jews who were killed. The Germans were very close to developing the atomic bomb before the USA. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and the differently abled on everyone? Of course they would have. And then, what would have happened once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.

Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does that play out if that nation is headed up by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no-one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, that more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?

—————————————-

More about Turing’s Nightmares can be found here: Author Page on Amazon

Deconstructing the job-based economy. 

29 Wednesday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, automation, cognitive computing, ethics, jobs, sports, the singularity, Turing


Recently, various economists, business leaders, and twitterists have opined about the net effect of artificial intelligence and robotics on jobs. Of course, no-one can really predict the future. (And that will remain true even should a “hyper-intelligent AI system” evolve.) The discussion does raise interesting points about the nature of work and what a society might be like if only a small fraction of people are “required” to work in order to meet the economic needs of the population.

As one tries to be precise, it becomes necessary to be a little clearer about what is meant by “work”, “the economic needs” and “the population.” For example, at one extreme, one can imagine a society that requires nearly everyone to work, but only between the ages of 30 and 50 and only for a few hours a week. This would allow the “work” to be spread widely through the population. Or, one could imagine “work” in which everyone, and not just a few researchers and academics, would be encouraged to spend at least 50% of their time continuing to monitor and improve their performance, take courses, do actual research, take the time to communicate with users, etc. Alternatively, one could imagine a society in which only 1/10 to 1/3 of the people worked while others did not work at all. In still another version, rather than have long-term jobs, people have a way of posting needs for very small, self-contained tasks, and people choose ones that they want in return for credits which can be used for various luxuries.

When we speak of economic “needs,” we might do well to distinguish between “needs” and “wants,” although these are not absolutely well-defined categories. We need nutrition and have no need for refined sugar, but to most people, it tastes good so we may well “want” it. We can imagine that, at one extreme, the economy produces enough of some bland substance like “Soylent Green” to provide everyone’s nutritional needs but no-one ever gets a gourmet meal (or even a burger with fries). It gets rather fuzzier when we discuss “contingent needs.” No-one “needs” a computer, after all, in order to live. However, if you “must” do a job, you may well “need” a computer to do that job. If you want to live a full life, you may “want” to take pictures and store them on your computer. If you want, on the other hand, to spy on everyone and be able to charge exorbitant prices in the future, then you “need” to convince everyone to store their photos in the “cloud.” Then, once everyone has all their photos in the cloud, you can arbitrarily do whatever you want to mess them over. You don’t really “need” to drive folks crazy, but it might be one way to get rich.

How much “work” is required depends not only on how fully we satisfy wants as well as needs, but also on the population that is supported. For many millennia, the population of the earth was sustained by hunting and gathering and stayed small and stable. We cannot support 7 billion people in that manner. Seven billion require some type of agriculture, although it might be the case that it can be done more locally and not require agro-business. In any case, all the combinations of population, how broadly human wants and needs are to be satisfied, and how work is distributed across the population will make huge differences in the social, economic, and political implications of “The Singularity.” Even if an actual “Singularity” is never reached, tsunamis of change are in store due to robotics, artificial intelligence and the Internet of Things.

Work is not only about providing economic value in return for other economic values. Work provides people with many of their social connections. Friends are often met through work, as are spouses. Even the acquaintances at work who never become friends provide a social facilitation function. If there is no work, people can find other ways to engage socially with others; e.g., walking in parks, being on sports teams, constructing collaborative works of art, making music, etc. It is likely that people need (not just want) not only some feeling of social connection, but also of social contribution. We are probably “wired” to want to help others, provide value, give others pleasure, and so on. If work with pay is not necessary for most people, some other ways must be devised so that each person feels that they are “important” in the sense of providing the others in their “tribe” some value.

Work provides people “focus” as well as identity. If work is not economically necessary, it will be necessary that other mechanisms are available that also provide focus and identity. Currently, in areas where jobs are few and far between, people may find focus and identity in “gangs.” Hopefully, if millions of people lose jobs to automation, artificial intelligence, and robotics, we will collectively find better alternatives for providing a sense of belonging, focus and identity than lawless gangs.

Some of the many “jobs” performed by AI systems in Turing’s Nightmares include: musical composer, judge, athlete, lawyer, driver, family therapist, doctor, executioner, disaster recovery, disaster planning, peacemaker, personal assistant, winemaker, security guard, and self-proclaimed god. Do you think there are jobs that can never be performed by AI systems?

—————————————

Readers may enjoy my book about possible implications of “The Singularity.”

http://tinyurl.com/hz6dg2d

The following book explores (among other topics) how amateur sports may provide many of the same benefits as work.

http://tinyurl.com/ng2heq3

You can also follow me on twitter JCThomas@truthtableJCT

Doing One’s Level Best at Level Measures

11 Saturday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 4 Comments

Tags

AI, Artificial Intelligence, cognitive computing, customer service, ethics, the singularity, Turing, user experience


(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles can be found in the Business Insider article linked below.

I find several interesting issues in the article. In this post, we explore the first; viz., that the idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such hand printed characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and equally important, other living things are capable of an even wider variety of actions. Does building a beehive require a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt or The Beatles with Mozart or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well prevent people from using resources to solve real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well-suited for and that people are bad at (or don’t particularly enjoy) and build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system with machine and human components that is far superior to either alone, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems and helps us learn more about the structure of task domains and more about how humans do things. I am not at all against seeing how a computer can win at Jeopardy or play superior Go or invent new recipes or play ping pong. We can learn on all three of the fronts listed above in any of these domains. However, in none of these cases, is the likely outcome that computers will “replace” human beings; e.g., at playing Jeopardy, playing GO, creating recipes or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity and social relationships. To take all of these away at the same time that a huge economic burden is imposed on someone seems heartless. In many cases, the “automation” cannot really do the complete job. What automation does accomplish is to do part of the job. Often the “customer” or “user” must themselves do the rest of the job. Most readers will have experienced dialing a “customer service number” which actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is a large enough one, you may be able to find the “secret code” to get through to a human operator, in which case you will be put into a queue approximately the length of the Nile.

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement, “Please stay on the line. Your call is important to us,” as well as the ever-popular, “Did you know that you can solve all your problems by going on line and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout?” This message is particularly popular for companies who provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service; it has automated part of the service, causing you further hassles and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the mathematical calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention rates is tricky, so that part is typically just not done. The cost savings from firing 10 human operators, including overhead, they might calculate to be $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no-one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So it seems as though it would take only four years to reach a break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four the IT staff payroll may well have grown to $4,000,000 per annum.
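For what it’s worth, the arithmetic of that scenario is easy to run. The figures below are the illustrative ones from the paragraph above ($2,000,000 system, $500,000 per year in saved operator payroll, an IT payroll that reaches $4,000,000 per year by year four); the assumption that the IT payroll grows linearly is mine, added only to complete the sketch.

```python
# Back-of-the-envelope model of the scenario above; all figures illustrative.
ai_system_cost = 2_000_000       # up-front cost of buying/developing the AI system
operator_savings = 500_000       # payroll saved per year by cutting 10 operators

# Naive calculation: ignore everything that can go wrong.
print("naive break-even:", ai_system_cost / operator_savings, "years")   # 4.0

# Less naive: the IT staff needed to babysit the system grows from nothing
# to $4,000,000/year by year four (linear growth assumed for illustration).
net_position = -ai_system_cost
for year in range(1, 5):
    it_payroll = 1_000_000 * year
    net_position += operator_savings - it_payroll
    print(f"year {year}: net position ${net_position:,}")
```

On these (admittedly invented) growth numbers, the promised four-year break-even never arrives; the project simply goes further underwater each year.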

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, he looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, the dog, by opening its mouth, loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry, and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits so they cut costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need for actually thinking about whether it makes sense. Since it is fashionable to remind customers about their website, they will do it as well. Since it is now fashionable to replace the rest of their human customer service reps with personal assistants, this company will do that as well so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting because, by using the word “obstacles,” the article presumes that “of course” society should create human-like robots and that the questions of importance are simply what the obstacles are and how we overcome them. The question of whether or not creating human-like robots is desirable is thereby finessed.

—————————-

Follow me on twitter@truthtableJCT

Turing’s Nightmares

See the following article for a treatment about fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

The Winning Weekend Warrior discusses strategy and tactics for all sports — including business. Readers might also enjoy my sports blog

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3

Sweet Seventeen in Turing’s Nightmares

02 Thursday Jun 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, cybersex, emotional intelligence, ethics, the singularity, user experience


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even if you apologize for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real life case in which a college basketball player successfully sued to get royalties for his image in an EA sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA )

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if the protagonist in any of these scenarios is itself an AI system? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And, if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology or whether there are reasons why any advanced intelligence, carbon or silicon based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction or even chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/her/him. Because there would be no reason whatsoever to doubt an AI system, right?

Turing’s Nightmares: Chapter 16

25 Wednesday May 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, UX

WHO CAN TELL THE DANCER FROM THE DANCE?


Is it the same dance? Look familiar?

 

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem, Among School Children. The actual last line is: “How can we know the dancer from the dance?” Both phrasings tend to focus on the interesting problem of trying to separate process from product, personage from their creative works, calling into question whether it is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints and contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who have the key to this back door. Here, the implication is that some extremely wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short term profits. Of course, this is a work of fiction. In the real world, no-one would conceivably be evil enough to mortgage the human habitability of our planet for even more short term profit — certainly not someone already absurdly wealthy.

In the story, the protagonist, Geoffrey, is rather resentful of having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping that the super-intelligent system would be objective. We can also assume the requirement was added late but that no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. Anyway, Geoffrey doesn’t take long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence, per se, though it certainly could be used as a persuasive tool by AI systems. Rather than have a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” — in this case a super-intelligent someone — is persuaded by arguments to change their position and “agree” with the other Sing.

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan and Ruslan’s wife. People can be manipulated. Readers should design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale

 

Chapter 13: Turing’s Nightmares

17 Sunday Apr 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, crime and punishment, ethics, the singularity

CRIME AND PUNISHMENT


Chapter 13 of Turing’s Nightmares concerns itself with issues of crime and punishment. Our current system of criminal justice has evolved over thousands of years. Like everything else about modern life, it is based on a set of assumptions. While accurate DNA testing (and other modern technologies) have profoundly impacted the criminal justice system, super-intelligence and ubiquitous sensors and computing could well have even more profound impacts.

We often talk of punishment as being what is “deserved” for the crime. But we cannot change the past. It seems highly unlikely that even a super-intelligent computer system will be able to change the past. The real reason for punishment is to change the future. In Medieval Europe, a person who stole bread might well be hanged in the town square. One reason for meting out punishment in a formal system, then as well as now, is to prevent informal and personal retribution, which could easily spiral out of control and destroy the very fabric of society. A second rationale is the prevention of future crime by the punished person. If they are hanged, they cannot commit that (or any other) crime. The reason for hanging people publicly was to discourage others from committing similar crimes.

Today’s society may appear slightly more “merciful” in that first time offenders for some crimes may get off with a warning. Even for repeated or serious crimes, the burden of proof is on the prosecution and a person is deemed “innocent until proven guilty” under US law. I see three reasons for this bias. First, there is often a paucity of data about what happened. Eye witness accounts still count for a lot, but studies suggest that eye witnesses are often quite unreliable and that their “memory” for events is clouded by how questions are framed. For instance, studies by Elizabeth Loftus and others demonstrate that people shown a car crash on film and asked to estimate how fast the cars were going when they bumped into each other will estimate a much slower speed than if asked how fast the cars were going when they crashed into each other. Computers, sensors, and video surveillance are becoming more and more prevalent. At some point, juries, if they still exist, may well be watching crimes as recorded, not reconstructing them from scanty evidence.

A second reason for this presumption of innocence is the impact of bias. This is also why there is a jury of twelve people and why potential jurors can be dismissed ahead of time “for cause.” If crimes are judged, not by a jury of peers, but by a super-intelligent computer system, it might be assumed that such systems will not have the same kinds of biases as human judges and juries. (Of course, that assumption is not necessarily valid; it is a theme reflected in many chapters of Turing’s Nightmares and hence the topic of other blog posts.)

A third reason for showing “mercy” and making conviction difficult is that predicting future human behavior is difficult. Advances in psychological modeling already make it possible to predict behavior much better than we could a few decades ago, under very controlled conditions. But we can easily imagine that a super-intelligent system may be able to predict with a fair degree of accuracy whether a person who committed a crime in the past will commit one in the future.

In chapter 13, the convicted criminal is given “one last chance” to show that they are teachable. The reader may well question whether a “test” is a valid part of criminal justice, but it has often been one in the not-so-distant past. Many of those earlier “trials by fire” were based on superstition; today, however, we humans can and do design tests that predict future behavior to a limited degree. Tests help determine whether someone is granted admission to a college, medical school, law school, or business school. Often the tests are only moderately predictive. For instance, the SAT correlates with college performance at only about .4, which means it accounts for a mere 16% of the variance. From the standpoint of the individual, the score is not of much use. From the standpoint of the college administration, however, 16% can make the test very worthwhile. It may well be the case that a super-intelligent computer system could do a much better job of constructing a test to determine whether a criminal is likely to commit other crimes.
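(For readers who want the arithmetic behind that 16% figure: the proportion of variance a predictor accounts for is the square of its correlation with the outcome.)

```latex
% Variance explained by a predictor with correlation r = 0.4:
r^2 = (0.4)^2 = 0.16 \approx 16\%\ \text{of the variance}
```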

One could imagine that if a computer can predict human behavior that well, it should also be able to “cure” any hardened criminal. However, even a super-intelligent computer will presumably not be able to defy the laws of physics. It will not be able to position the planet Jupiter safely in orbit a quarter million miles from Earth just to give us a spectacular night sky. Because people form closed systems of thought, it may be equally impossible to cure everyone of criminal behavior, even for super-intelligent systems. People maintain false belief systems in the face of overwhelming evidence to the contrary. Indeed, the “trial by fire” that Brain faces is essentially a test of whether he is open to change based on evidence. Sadly, he is not.

Another theme of chapter 13 is that Brain’s trial by fire is televised. This is hardly far-fetched. Not only are (normal) trials televised today; so-called “reality TV” shows put people in all sorts of difficult situations. What might be perceived as a high level of cruelty in having people watch Brain fail his test is already present in much of what is available on commercial television. At least in the case of Brain’s hypothetical trial, there is a societal benefit: it could reduce the chances that others will follow in Brain’s footsteps.

We only see hints of Brain’s crime, which apparently involves elder fraud. As people are capable of living longer, and as overwhelming greed has moved from the “sin” to the “virtue” column in modern American society, we can expect elder fraud to increase as well, at least for a time. With increasing surveillance, however, we might eventually see an end to it.

Of course, the name “Brain” was chosen because, in a sense, our own intelligence as a species (our own brain) is being put on trial. Are we capable of adapting quickly enough to prevent ourselves from being the cause of our own demise? Just as the character Brain is too “closed” to make the adaptations necessary to stay alive, despite the evidence he is presented with, humanity at large seems to be making the same kinds of mistakes over and over: prejudice, war, rabble-rousing, blaming others, assigning power to those with money, and funneling the most money to those whose only “talent” is controlling the flow of money and power. We seem to have gained some degree of insight, but meanwhile we have developed numerous extremely effective weapons: biological, chemical, and atomic. Will super-intelligence be another such weapon? Or will it instead be used to prevent us from destroying each other?

Link to chapter 13 in this blog

Turing’s Nightmares (print version on Amazon)

Turing’s Nightmares: Seven

13 Sunday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, competition, cooperation, ethics, the singularity, Turing

Axes to Grind.


Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine, or a wiser machine, or a more enlightened one? These are all related but somewhat different concepts. A wiser machine, to take one example, might be a system that not only solves the problems it is given more quickly but also looks for different ways to formulate a problem; it looks for the “question behind the question,” or even looks for problems in the first place. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If a machine’s intelligence is very different from ours, it may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and the means to protect itself. Protection. Isn’t that the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected; it might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have none? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John makes is implicit: he is not trying to engage in a dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John has already made up his mind that intelligence is the ultimate goal, and he has no intention of jointly revisiting that goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better.

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of our most pressing problems even without super-intelligent machines. Will this happen? I don’t know. Could it happen? Yes. Unfortunately, Roger is not on board with that program of better cooperation; in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid they might try to prevent him from doing so, either by talking him out of it or by appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

Turing’s Nightmares

Turing’s Nightmares: Six

10 Thursday Mar 2016

Posted by petersironwood in sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, sports, Turing


Human Beings are Interested in Human Limits.

A Google AI system just won its second victory over the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players learn faster and that the level of top human play rises. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors as training devices. However, very soon these might also provide useful information during play. What about that? Suppose you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs, tennis racquets, and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance-enhancing” drugs just to stay healthy? Sharapova’s recent case is just one example. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system were able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw, or whether a tennis player is about to serve down the T or out wide. Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means they can pick out small visual details more quickly than their opponents and react to a serve or a curveball sooner. But it also means they are more likely to pick up subtle tip-offs in an opponent’s motion that give away intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Roger Federer or Andy Murray to detect patterns of tip-offs, and that information was then used to help train Djokovic to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signs of baseball base coaches, and so on. A minimal sketch of the kind of pattern-finding involved appears below.
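To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of that kind of pattern-finding: a simple classifier trained on a few invented motion features (ball-toss offset, shoulder rotation, racquet take-back height) to guess whether a serve will go down the T or out wide. The features, the numbers, and the supposed “tell” are all made up for illustration; no real player data or deployed system is implied.

```python
# Hypothetical sketch: guessing serve direction ("T" vs. "wide") from a few
# motion features. All data here is synthetic and invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500  # number of recorded serves in our pretend data set

# Pretend features extracted from video of the server, one row per serve:
toss_offset = rng.normal(0, 5, n)      # lateral ball-toss offset (cm)
shoulder_rot = rng.normal(90, 10, n)   # shoulder rotation (degrees)
takeback = rng.normal(150, 8, n)       # racquet take-back height (cm)
X = np.column_stack([toss_offset, shoulder_rot, takeback])

# Pretend "tell": serves with a more leftward toss tend to go down the T.
y = (toss_offset + rng.normal(0, 3, n) < 0).astype(int)  # 1 = "T", 0 = "wide"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Accuracy on held-out serves:", clf.score(X_test, y_test))
```

Even a crude model like this, if it did modestly better than chance on a real server, would yield exactly the sort of tip-off information a coach could fold into a returner’s training.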

Instead of teaching Novak Djokovic these patterns ahead of time, suppose he had a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed him to anticipate better?

I do not know the “correct” ethical answer to all of these dilemmas. To me, what matters most is being open and honest about what is happening. So, if Lance Armstrong wants to use performance-enhancing drugs, perhaps that is okay if and only if everyone else in the race knows it and has the opportunity to take the same drugs, and if everyone watching knows it as well. Similarly, although I would prefer that tennis players use IT only for training, I would not be dead set against real-time aids if the public knows about them. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe, but it has the side effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon
