Tag Archives: Turing

Abracadabra!

07 Sunday Aug 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized


Tags

"Citizens United", AI, Artificial Intelligence, biotech, cognitive computing, emotional intelligence, ethics, the singularity, Turing


Abracadabra! Here’s the thing. There is no magic. Of course, there is the magic of love and the wonder at the universe and so there is metaphorical magic. But there is no physical magic and no mathematical magic. Why do we care? Because in most science fiction scenarios, when super-intelligence happens, whether it is artificial or humanoid, magic happens. Not only can the super-intelligent person or computer think more deeply and broadly, they also can start predicting the future, making objects move with their thoughts alone and so on. Unfortunately, it is not just in science fiction that one finds such impossibilities but also in the pitches of companies about biotech and the future of artificial intelligence. Now, don’t get me wrong. Of course, there are many awesome things in store for humanity in the coming millennia, most of which we cannot even anticipate. But the chances of “free unlimited energy” and a computer that will anticipate and meet our every need are slim indeed.

This all-too-popular exaggeration is not terribly surprising. I am sure much of what I do seems quite magical to our cats. People in possession of advanced or different technology often seem “magical” to those with no familiarity with the technology. But please keep in mind: making a human brain “better” — whether by making it bigger, giving it more connections, or making it faster — will not enable the brain to move objects via psychokinesis. Yes, the brain does produce a minuscule amount of electricity, but way too little to move mountains or freight trains. Of course, machines can theoretically be built to wield a lot of physical energy, but it isn’t the information-processing part of the system that directly causes something in the physical world. It is through actuators of some type, just as it is with animals. Of course, super-intelligence could make the world more efficient. It is also possible that super-intelligence might discover as yet undiscovered forces of the universe. If it turns out that our understanding of reality is rather fundamentally flawed, then all bets are off. For example, if it turns out that there are twelve fundamental forces in the universe (or just one), and a super-intelligent system determines how to use them, it might be possible that there is potential energy already stored in matter which can be released by the slightest “twist” in some other dimension or using some as yet undiscovered force. To human beings who have never known about the other eight forces, let alone how to harness them, this might well appear to be “magic.”

There is another, more subtle kind of “magic” that might be called mathematical magic. As has been known for a long time, it is theoretically possible to play perfect chess by calculating all possible moves, and all possible responses to those moves, etc., down to the final draws and checkmates. It has been calculated that such an enumeration of contingencies would not be possible even if the entire universe were a nano-computer operating in parallel since the beginning of time. There are many similar domains. Just because a person or computer is way, way smarter does not mean they will be able to calculate every possibility in a highly complex domain.
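
To make the scale concrete, here is a back-of-the-envelope sketch in Python. The figures are the commonly cited order-of-magnitude estimates (Shannon’s ~10^120 variations for chess, ~10^80 atoms in the observable universe); treating every atom as a petaflop processor is an invented, deliberately over-generous assumption.

```python
# Rough scale of exhaustive chess search vs. the universe-as-computer.
# All figures are commonly cited order-of-magnitude estimates.
GAME_TREE = 10**120            # Shannon's estimate of chess variations
ATOMS = 10**80                 # atoms in the observable universe
AGE_SECONDS = 4 * 10**17       # ~13 billion years
OPS_PER_ATOM_PER_SEC = 10**15  # every atom as a petaflop processor (generous)

total_ops = ATOMS * AGE_SECONDS * OPS_PER_ATOM_PER_SEC
shortfall = GAME_TREE // total_ops

print(f"operations available: ~10^{len(str(total_ops)) - 1}")    # ~10^112
print(f"still too few by a factor of ~10^{len(str(shortfall)) - 1}")
```

Even with every atom computing flat out since the Big Bang, the enumeration comes up short by many orders of magnitude.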

Of course, it is also possible that some domains might appear impossibly complex but actually be governed by a few simple, but extremely difficult to discover laws. For instance, it might turn out that one can calculate the precise value of a chess position (encapsulating all possible moves implicitly) through some as yet undiscovered algorithm written perhaps in an as yet undesigned language. It seems doubtful that this would be true of every domain, but it is hard to say a priori. 

There is another aspect of unpredictability, and that has to do with random and chaotic effects. Imagine trying to describe every single molecule of earth’s seas and atmosphere in terms of its motion and position. Even if there were some way to predict state N+1 from state N, we would have to know everything about state N. The effects of the slightest miscalculation or missing piece of data could be amplified over time. So long-term predictions of fundamentally chaotic systems, like the weather, or what your kids will be up to in 50 years, or what the stock market will be in 2600, are most likely impossible, not because our systems are not intelligent enough but because such systems are by their nature not predictable. In the short term, weather is largely, though not entirely, predictable. The same holds for what your kids will do tomorrow or, within limits, what the stock market will do. The long term predictions are quite different.
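
A toy illustration of that amplification: the logistic map is a standard one-line model of chaos (the connection to weather is only an analogy). The sketch below starts two runs that differ by one part in a million — a tiny “measurement error” — and watches them part ways.

```python
# Two runs of the chaotic logistic map, differing by one part in a million.
r = 3.9          # parameter value in the chaotic regime
x = 0.500000     # "true" initial state
y = 0.500001     # the same state, measured with a tiny error

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: true={x:.6f}  estimate={y:.6f}  error={abs(x - y):.6f}")
```

Within a few dozen steps the “error” is as large as the state itself. A smarter computer does not help; only a perfect measurement would, and there is no such thing.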

In The Sciences of the Artificial, Herb Simon provides a nice thought experiment about the temperature in various regions of a closed space. I am paraphrasing, but imagine a dormitory with four “quads.” Each quad has four rooms, and each room is partitioned into four areas with screens. The screens are not very good insulators, so if the temperatures in these areas differ, they will quickly converge. In the longer run, the temperature will tend toward the average of the entire quad. In the very long term, if no additional energy is added, the entire dormitory will tend toward the global average. So, when it comes to many kinds of interactions, nearby interactions dominate, but in the long term, more global forces come into play.
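
Here is a minimal simulation of the two time scales in Simon’s example, assuming simple linear mixing. The coupling rates are made-up numbers chosen only so that screens leak heat much faster than walls.

```python
import random

# Four rooms of four screened areas each. Areas mix quickly with their
# room's mean (thin screens); rooms mix slowly with the dorm's mean (walls).
SCREEN_RATE, WALL_RATE = 0.5, 0.02
rooms = [[random.uniform(10, 30) for _ in range(4)] for _ in range(4)]

def step(rooms):
    dorm_mean = sum(sum(r) for r in rooms) / 16
    new = []
    for room in rooms:
        room_mean = sum(room) / 4
        new.append([t + SCREEN_RATE * (room_mean - t) + WALL_RATE * (dorm_mean - t)
                    for t in room])
    return new

for day in range(100):
    rooms = step(rooms)
    if day in (0, 4, 99):
        print(f"day {day + 1}: room means", [round(sum(r) / 4, 1) for r in rooms])
```

Each room becomes internally uniform within a few steps; the differences between rooms take far longer to wash out. Local interactions dominate early, global ones late — exactly Simon’s point.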

Now, let us take Simon’s simple example and consider what might happen in the real world. We want to predict what the temperature will be in a particular partitioned area in 100 years. In reality, the dormitory is not a closed system. Someone may buy a space heater and continually keep their little area much warmer. Or maybe that area has a window that faces south. But it gets worse. Much worse. We have no idea whether the dormitory will even exist in 100 years. It depends on fires, earthquakes, and the generosity of alumni. In fact, we don’t even know whether brick-and-mortar colleges will exist in 100 years. As we try to predict over longer and longer time frames, not only do more physically distant factors come into play; the determining factors also become more conceptually distant. In a 100-year time frame, the entire college may or may not exist, and we don’t even know whether the determining factor(s) will be financial, astronomical, geological, political, social, physical, or what. This is not a problem that will be solved via “Artificial Intelligence” or by giving human beings “better brains” via biotech.

Whoa! Hold on there. Once again, it is possible that in some other dimension or using some other as yet undiscovered force, there is a law of conservation so that going “off track” in one direction causes forces to correct the imbalance and get back on track. It seems extremely unlikely, but it is conceivable that our model of how the universe works is missing some fundamental organizing principle and what appears to us as chaotic is actually not.

The scary part, at least to me, is that some descriptions of the wonderful world that awaits us (once our biotech or AI start-up is funded) depend on there being a much simpler, as yet unknown force or set of forces that is discoverable and completely unanticipated. Color me “doubting Thomas” on that one.

It isn’t just that investing in such a venture might be risky in terms of losing money. It is that we humans are subject to a blind pride that makes people presume that they can predict what the impact of making a genetic change will be, not just on a particular species in the short term, but on the entire planet in the long run! We can indeed make small changes in both biotech and AI and see improvements in our lives. But when it comes to recreating dinosaurs in a real-life Jurassic Park or replacing human psychotherapists with robotic ones, we really cannot predict what the net effect will be. As humans, we are certainly capable of containing, imagining, and slowly testing possibilities as we introduce them. Yeah. That could happen. But…

What seems to actually happen is that companies not only want to make more money; they want to make more money now. We have evolved social and legal and political systems that put almost no brakes on runaway greed. The result is that more than one drug has been put on the market that has had a net negative effect on human health. This is partly because long-term effects are very hard to ascertain, but the bigger cause is unbridled greed. Corporations, like horses, are powerful things. You can ride farther and faster on a horse. And certainly corporations are powerful agents of change. But the wise rider is master of, or partner with, the horse. They don’t allow themselves to be dragged along the ground by a rope while the horse goes wherever it will. Sadly, that is precisely the position society is in vis-à-vis corporations. We let them determine the laws. We let them buy elections. We let them control virtually every news medium. We no longer use them to get amazing things done. We let them use us to get done what they want done. And what is that thing that they want done? Make hugely more money for a very few people. Despite this, most companies still manage to do a lot of net good in the world. I suspect this is because human beings are still needed for virtually every vital function in the corporation.

What will happen once the people in a corporation are no longer needed? What will happen when people who remain in a corporation are no longer people as we know them, but biologically altered? It is impossible to predict with certainty. But we can assume that it will seem to us very much like magic.


Very.

Dark.

Magic.

Abracadabra!

Turing’s Nightmares


Old Enough to Know Less

19 Tuesday Jul 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, machine learning, prejudice, the singularity, Turing


Old Enough to Know Less?

There are many themes in Chapter 18 of Turing’s Nightmares. Let us begin with a major theme that is actually meant as practical advice for building artificial intelligence. I believe that an AI system that interacts well with human beings will need to move around in physical space and social space. Whether or not such a system will end up actually experiencing human emotions is probably unknowable. I suspect it will only be able to understand, simulate, and manipulate such emotions. I believe that the substance of which something is made typically has deep implications for what it is. In this case, the fact that we human beings are based on a billion years of evolution and are made of living cells has implications about how we experience the world. However, here we are addressing a much less philosophical and more practical issue. Moving around and interacting facilitates learning.

I first discussed this in an appendix to my dissertation. In it, I compared human behavior in a problem solving task to the behavior of an early and influential AI system modestly titled “The General Problem Solver.” In studying problem solving, I came across two interesting findings that seemed somewhat contradictory. On the one hand, Grand Master chess players had outstanding memory for “real” chess positions (i.e., ones taken from real high-level games). On the other hand, think-aloud studies of Grand Masters showed that they re-examined positions that they had already visited earlier in their thinking. My hypothesis was that Grand Masters examined one part of the game tree, then another, and in so doing updated a slightly altered copy of their general evaluation function; the copy learned from the exploration, so that the evaluation function applied to this particular position became tuned to this particular position.
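
Here is one way to render that hypothesis in code. This is my illustrative sketch, not the dissertation’s actual model: the searcher keeps a general, weighted evaluation function but tunes a local copy of the weights against whatever deep exploration reveals, so that returning to an earlier position uses a position-specific evaluator.

```python
# Sketch: tune a local copy of a general evaluation function during search.
def evaluate(features, weights):
    return sum(f * w for f, w in zip(features, weights))

def tune_copy(subtree_leaves, weights, rate=0.1):
    """Nudge a copy of the weights toward what deep search actually found.

    subtree_leaves: (features, observed_outcome) pairs from explored lines.
    """
    local = list(weights)              # the general function stays intact
    for features, outcome in subtree_leaves:
        error = outcome - evaluate(features, local)
        local = [w + rate * error * f for w, f in zip(local, features)]
    return local

general = [1.0, 0.5, 0.3]                      # e.g., material, mobility, king safety
leaves = [([1, 2, 0], 2.5), ([1, 1, 1], 1.0)]  # invented search results
tuned = tune_copy(leaves, general)

# Re-examining an earlier position now uses position-tuned weights:
print(evaluate([1, 2, 0], general), evaluate([1, 2, 0], tuned))
```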

Our movements through space, in particular, provide us with a huge number of examples from which to learn about vision, sound, touch, kinesthetics, smell, and their relationships. What we see when we walk, for instance, is not a random sequence of images (unlike TV commercials!), but one with very particular and useful properties. As we approach objects, we typically get more and more detailed images of those objects. This allows a constant tuning process for our ability to recognize things at a distance and with minimal cues.

An analogous case could be made for getting to know people. We make inferences and assumptions about people initially based on very little information. Over time, if we get to know them better, we have the opportunity to find out more about them. This potentially allows us (or a really smart robot) to learn to “read” people better over time. But it does not always work out that way. Because of the ambiguities of interpreting human actions and motives as well as the longer time delays, learning more about people is not guaranteed as it is with visual stimuli. If a person begins interacting with people who are predefined to be in a “bad” category, experience with that person may be looked at through such a heavy filter that people never change their minds despite what an outside observer might perceive as overwhelming evidence. If a man believes all people who wear hats are “stupid” and “prone to violence” he may dismiss a smart, peaceful person who wears a hat as “the exception that proves the rule” or say, “Well, he doesn’t always wear hats” or “The hats he wears are made by non-hat wearers and that makes him seem peaceful and intelligent.” The continued misperceptions, over-generalizations, and prejudices partly continue because they also form a framework for rationalizing greed and unfairness. It’s “okay” to steal from people who wear hats because, after all, they are basically stupid and prone to violence.

Unfortunately, when it comes to the potential for humans to learn about each other, there are a few people who actually prey on and amplify the unenlightened aspects of human nature because they themselves gain power, wealth, and popularity by doing so. They say, in effect, “All the problems you are experiencing — they are not your fault! They are because of the people with hats!” It’s a ridiculous presumption, but it often works. Would intelligent robots be prone to the same kinds of manipulations? Perhaps. It probably depends, not on a wheelbarrow filled with rainwater, but on how it is initially programmed. I suspect that an “intelligent agent” or “personal assistant” would be better off if it could take a balanced view of its experience rather than one top-down directed by pre-programmed prejudice. In this regard, creators of AI systems (as well as everyone else) would do well to employ the “Iroquois Rule of Six.” What this rule claims (taken from the work of Paula Underwood) is that when you observe a person’s actions, it is normal to immediately form a hypothesis about why they are doing what they do. Before you act, however, you should typically generate five additional hypotheses about why they do as they do. Try to gather evidence about these hypotheses.
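
For builders of AI systems, the rule translates naturally into a guard condition before action. The sketch below is a toy rendering; the hypothesis list, the scoring function, and the 0.2 “clear separation” threshold are all invented placeholders.

```python
# Toy "Rule of Six": refuse to act until six hypotheses have been weighed.
def rule_of_six(observation, hypotheses, evidence_score):
    """hypotheses: candidate explanations; evidence_score: hypothesis -> [0, 1]."""
    if len(hypotheses) < 6:
        return f"only {len(hypotheses)} hypotheses about {observation!r}; generate more"
    best, runner_up = sorted(hypotheses, key=evidence_score, reverse=True)[:2]
    # Act only if the evidence clearly separates the best explanation.
    if evidence_score(best) - evidence_score(runner_up) < 0.2:
        return "gather more evidence"
    return f"act on: {best}"

hyps = ["he is hostile", "he is tired", "he misheard me", "he is joking",
        "he is distracted", "he follows a custom I don't know"]
print(rule_of_six("he didn't answer", hyps, lambda h: 0.4))  # -> gather more evidence
```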

If prejudice and bigotry are allowed to flourish as an “acceptable political position,” they can lead to the erosion of peace, prosperity, and democracy. This is especially dangerous in a country as diverse as the USA. Once negative emotions about others are accepted as fine and dandy, prejudice and bigotry can become institutionalized. For example, in the Jim Crow South, not only were many if not most individual “Whites” themselves prejudiced; it became illegal even for unprejudiced whites to sit at the same counters, use the same restrooms, etc. People could literally be thrown in jail simply for being rational. In Nazi Germany, not only were Jews subject to genocide; German non-Jewish citizens could be prosecuted for aiding them; in other words, for doing something human and humane. Once such a system became law, with an insane dictator at the helm, millions of lives were lost in “fixing” it. Of course, even having the Allies win World War II did not bring back the six million Jews who were killed. The Germans were very close to developing the atomic bomb before the USA. Had they developed such a bomb in time, with an egomaniacal dictator at the helm, would they have used it to impose their hatred of Jews, Gypsies, homosexuals, and the differently abled on everyone? Of course they would have. And then, what would have happened once all the “misfits” were eliminated? You guessed it. Another group would have been targeted. Because getting rid of all the misfits would not bring the promised peace and prosperity. It never has. It never will. By its very nature, it never could.

Artificial Intelligence is already a useful tool. It could continue to evolve in even more useful and powerful directions. But how does that potential for a powerful amplifier of human desire play out if it falls into the hands of a nation with atomic weapons? How does that play out if that nation is headed up by an egomaniac who plays on the very worst of human nature in order to consolidate power and wealth? Will robots be programmed to be “open-minded” and learn for themselves who should be corrected, punished, imprisoned, eliminated? Or will they become tools to eliminate ever-larger groups of the “other” until no-one is left but the man on the hill, the man in the high castle? Is this the way we want the trajectory of primate evolution to end? Or do we find within ourselves, each of us, that more enlightened seed to plant? Could AI instead help us finally overcome prejudice and bigotry by letting us understand more fully the beauty of the spectrum of what it means to be human?

—————————————-

More about Turing’s Nightmares can be found on my Author Page on Amazon.

Deconstructing the job-based economy. 

29 Wednesday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, automation, cognitive computing, ethics, jobs, sports, the singularity, Turing


Recently, various economists, business leaders, and twitterists have opined about the net result of artificial intelligence and robotics on jobs. Of course, no-one can really predict the future. (And that will remain true, even should a “hyper-intelligent AI system” evolve.) The discussion does raise interesting points about the nature of work and what a society might be like if only a small fraction of people are “required” to work in order to meet the economic needs of the population.

As one tries to be precise, it becomes necessary to be a little clearer about what is meant by “work,” “the economic needs,” and “the population.” For example, at one extreme, one can imagine a society that requires nearly everyone to work, but only between the ages of 30 and 50 and only for a few hours a week. This would allow the “work” to be spread widely through the population. Or, one could imagine “work” in which everyone, and not just a few researchers and academics, would be encouraged to spend at least 50% of their time continuing to monitor and improve their performance, take courses, do actual research, take the time to communicate with users, etc. Alternatively, one could imagine a society in which only 1/10 to 1/3 of the people worked while the others did not work at all. In still another version, rather than having long-term jobs, people would have a way of posting needs for very small, self-contained tasks, and people would choose the ones they want in return for credits which could be used for various luxuries.

When we speak of economic “needs,” we might do well to distinguish between “needs” and “wants,” although these are not absolutely well-defined categories. We need nutrition and have no need for refined sugar, but to most people it tastes good, so we may well “want” it. We can imagine that, at one extreme, the economy produces enough of some bland substance like “Soylent Green” to provide everyone’s nutritional needs but no-one ever gets a gourmet meal (or even a burger with fries). It gets rather fuzzier when we discuss “contingent needs.” No-one “needs” a computer, after all, in order to live. However, if you “must” do a job, you may well “need” a computer to do that job. If you want to live a full life, you may “want” to take pictures and store them on your computer. If you want, on the other hand, to spy on everyone and be able to charge exorbitant prices in the future, then you “need” to convince everyone to store their photos in the “cloud.” Then, once everyone has all their photos in the cloud, you can arbitrarily do whatever you want to mess them over. You don’t really “need” to drive folks crazy, but it might be one way to get rich.

How much “work” is required depends not only on how fully we satisfy wants as well as needs, but also on the population that is supported. For many millennia, the population of the earth sustained itself by hunting and gathering and stayed small and stable. We cannot support 7 billion people in that manner. Seven billion require some type of agriculture, although it might be the case that it can be done more locally and not require agro-business. In any case, all the combinations of population, how broadly human wants and needs are to be satisfied, and how work is distributed across the population will make huge differences in the social, economic, and political implications of “The Singularity.” Even if an actual “Singularity” is never reached, tsunamis of change are in store due to robotics, artificial intelligence, and the Internet of Things.

Work is not only about providing economic value in return for other economic values. Work provides people with many of their social connections. Friends are often met through work, as are spouses. Even the acquaintances at work who never become friends provide a social facilitation function. If there is no work, people can find other ways to engage socially with others; e.g., walking in parks, being on sports teams, constructing collaborative works of art, making music, etc. It is likely that people need (not just want), not only some feeling of social connection, but of social contribution. We are probably “wired” to want to help others, provide value, give others pleasure, and so on. If work with pay is not necessary for most people, some other ways must be devised so that each person feels that they are “important” in the sense of providing the others in their “tribe” some value.

Work provides people “focus” as well as identity. If work is not economically necessary, it will be necessary that other mechanisms are available that also provide focus and identity. Currently, in areas where jobs are few and far between, people may find focus and identity in “gangs.” Hopefully, if millions of people lose jobs from automation, artificial intelligence, and robotics, we will collectively find better alternatives for providing a sense of belonging, focus, and identity than lawless gangs.

Some of the many “jobs” performed by AI systems in Turing’s Nightmares include: musical composer, judge, athlete, lawyer, driver, family therapist, doctor, executioner, disaster recovery, disaster planning, peacemaker, personal assistant, winemaker, security guard, and self-proclaimed god. Do you think there are jobs that can never be performed by AI systems?

—————————————

Readers may enjoy my book about possible implications of “The Singularity.”

http://tinyurl.com/hz6dg2d

The following book explores (among other topics) how amateur sports may provide many of the same benefits as work.

http://tinyurl.com/ng2heq3

You can also follow me on twitter JCThomas@truthtableJCT

Doing One’s Level Best at Level Measures

11 Saturday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, customer service, ethics, the singularity, Turing, user experience


(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles is found in the Business Insider article linked below.

I find several interesting issues in the article. In this post, we explore the first; viz., the idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such hand-printed characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and equally important, other living things are capable of an even wider variety of actions. Is building a beehive a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing, and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt, or The Beatles with Mozart, or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well prevent people from using resources to solve real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well-suited for and that people are bad at (or don’t particularly enjoy) and build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system, with machine and human components, that is far superior to either separately, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems and helps us learn more about the structure of task domains and more about how humans do things. I am not at all against seeing how a computer can win at Jeopardy or play superior Go or invent new recipes or play ping pong. We can learn on all three of the fronts listed above in any of these domains. However, in none of these cases is the likely outcome that computers will “replace” human beings; e.g., at playing Jeopardy, playing Go, creating recipes, or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity, and social relationships. To take all of these away at the same time a huge economic burden is imposed on someone seems heartless. In many cases, the “automation” cannot really do the complete job. What automation does accomplish is to do part of the job. Often the “customer” or “user” must themselves do the rest of the job. Most readers will have experienced dialing a “customer service number” which actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is large enough, you may be able to find the “secret code” to get through to a human operator, in which case you will be put into a queue approximately the length of the Nile.

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement, “Please stay on the line. Your call is important to us,” as well as the ever-popular, “Did you know that you can solve all your problems by going on line and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout”? This message is particularly popular with companies that provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service but automated a part of the service, causing you further hassles and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the mathematical calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention rates is tricky, so that part is typically just not done. The cost savings due to firing 10 human operators, including overhead, might be calculated at $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no-one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So, it seems as though it would only take four years to reach a break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four the IT staff payroll may well have grown to $4,000,000 per annum.
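
Spelling out that arithmetic: the $2,000,000 system cost and $500,000/year savings are the figures above, while the year-by-year IT payroll is an invented growth path consistent with reaching $4,000,000 by year four.

```python
# Naive break-even vs. break-even with growing maintenance/IT costs.
system_cost = 2_000_000
annual_savings = 500_000                          # fired operators, incl. overhead
it_payroll = [0, 500_000, 1_500_000, 4_000_000]   # assumed growth, years 1-4

cumulative = -system_cost
for year, it_cost in enumerate(it_payroll, start=1):
    cumulative += annual_savings - it_cost
    print(f"year {year}: net position ${cumulative:,}")
# The "four-year break-even" never arrives; the hole just gets deeper.
```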

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, he looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, the dog, by opening its mouth, loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry, and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits so they cut costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need for actually thinking about whether it makes sense. Since it is fashionable to remind customers about their website, they will do it as well. Since it is now fashionable to replace the rest of their human customer service reps with personal assistants, this company will do that as well so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting: by using the word “obstacles,” the article presumes that “of course” society should create human-like robots, and the questions of importance are simply what the obstacles are and how we overcome them. The question of whether or not creating human-like robots is desirable is thereby finessed.

—————————-

Follow me on twitter: @truthtableJCT

Turing’s Nightmares

See the following article for a treatment about fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

The Winning Weekend Warrior discusses strategy and tactics for all sports — including business. Readers might also enjoy my sports blog. The Business Insider article on obstacles to creating human-like robots, mentioned above, is here:

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3

Turing’s Nightmares: Chapter 15

16 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring

Tutoring Intelligent Systems.


Learning by modeling; in this case by modeling something in the real world.

Of course, the title of the chapter is a take-off on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples that are geared toward the specific gaps or misconceptions that that particular student has. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human classroom instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor,” there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the proceedings: Radlinski, B., Atwood, M., and Villano, M., DIME: Distributed Intelligent Multimedia Education, CHI ’94 Conference Companion on Human Factors in Computing Systems, pp. 15-16, ACM, New York, 1994.)
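
To give a flavor of how a tutor can “infer a model” of what a student knows, here is a minimal sketch of Bayesian knowledge tracing, one standard formalism in this spirit. (Anderson’s tutors used richer production-rule model tracing; the parameter values below are invented.)

```python
# Minimal Bayesian knowledge tracing: update P(skill known) per answer.
P_LEARN, P_SLIP, P_GUESS = 0.15, 0.10, 0.20  # invented parameter values

def update(p_known, correct):
    if correct:
        likelihood = p_known * (1 - P_SLIP)
        evidence = likelihood + (1 - p_known) * P_GUESS
    else:
        likelihood = p_known * P_SLIP
        evidence = likelihood + (1 - p_known) * (1 - P_GUESS)
    posterior = likelihood / evidence
    # The student may also acquire the skill on this practice opportunity.
    return posterior + (1 - posterior) * P_LEARN

p = 0.3  # prior belief that the student already has the skill
for answer in (True, False, True, True):
    p = update(p, answer)
    print(f"correct={answer}: P(known) = {p:.2f}")
# The tutor steers instruction toward whichever skills stay low.
```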

The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. It seems as though many AI systems take one of two approaches: having human experts encode knowledge rather directly, or exposing the system to many examples and letting it learn on its own. Human beings often learn by being exposed to examples while having a guide, tutor, or coach who helps them focus, provides modeling, and chooses the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to exposure to many Jeopardy game questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.

In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. What the tutor actually said, presumably to encourage “Sing” to consider other possibilities about John and Alan, was put together with another hint about the implications of being differently abled to yield the idea that there was no necessity for the AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates that it will take a long time to solve an international crisis. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and quicker.

Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, etc. and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might really help people see their way clear to find other solutions to problems.


The story ends with a refrain paraphrased from the TV series “West Wing” — “What comes next?” is meant to be reminiscent of “What’s Next?” which President Bartlett uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater games; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it’s on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next after AI systems are able both to understand and utilize human emotion but also to invent their own emotional states on the fly based on the nature of the problem at hand. Indeed, what comes next?

Turing’s Nightmares: Chapter 14

02 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, pets, the singularity, Turing



Dear reader: Spoiler alert: before reading this blog post, you may want to read the associated chapter. You can buy the physical book, Turing’s Nightmares, at this link:

http://tinyurl.com/hz6dg2d

An earlier version of the chapter discussed below can be found at this link:

https://petersironwood.wordpress.com/2015/10/

One of the issues raised by chapter 14 of Turing’s Nightmares is that the scenario presumes that, even in the post-singularity future, there will still be a need for government. In particular, the future envisions individuals as well as a collective. Indeed, the goals of the “collective” will remain somewhat different from the goals of various individuals. Moreover, an argument can be made that the need for complex governmental processes and structures will increase with hyper-intelligence. But that argument will be saved for another time.

This scenario further assumes that advanced AI systems will have emotions and emotional attachments to other complex systems. What is the benefit of having emotional attachments? Some people may feel that emotional attachments are as outdated as the appendix; perhaps they had some function when humans lived in small tribes but now they cause as much difficulty as they confer an advantage. Even if you believe that emotional attachments are great for humans, you still might be puzzled why it could be advantageous for an AI system to have any.

When it comes to people, they vary a lot in their capabilities, habits, etc. So, one reason emotional attachments “make sense” is to prefer, and act in the interest of, people who have a range of useful and complementary abilities and habitual behaviors. Wouldn’t you naturally start to like someone who has similar interests, other things being equal? Moreover, as you work with someone else toward a common goal, you begin to understand and learn how to work together better. You learn to trust each other and communicate in shorthand. If you become disconnected from such a person, it can be disconcerting for all sorts of reasons. But exactly the same could hold true for an autonomous agent with artificial intelligence. There could be reasons for having not one ubiquitous type of robot but millions of different kinds. Some of these would work well together, and it could be advantageous to have them “bond” and differentially prefer their mutual proximity and interaction.

Humans, of course, also make emotional attachments, sometimes very deep, with animals. Most commonly, people form bonds with cats, dogs, and horses, but people have had a huge variety of pets including birds, turtles, snakes, ferrets, mice, rabbits and even tarantula spiders. What’s up with that? The discussion above about emotional attachment was intentionally “forced” and “cold”, because human attachments cannot be well explained in utilitarian terms. People love others who have no possible way to offer back any value other than their love in return.

In some cases, pets do have some utilitarian value, such as catching mice, barking at intruders, or pulling hay wagons. But overwhelmingly, people love their pets because they love their pets! If asked, they may say because they are “cute” or “cuddly,” but this doesn’t really answer the question as to why people love pets. According to a review by John Archer published in the July 1997 issue of Evolution and Human Behavior, “These mechanisms can, in some circumstances, cause pet owners to derive more satisfaction from their pet relationship than those with humans, because they supply a type of unconditional relationship that is usually absent from those with other human beings.”

However, there are also other hypotheses. For example, Edward O. Wilson’s Biophilia (1986) (http://www.amazon.com/Biophilia-Edward-Wilson/dp/0674074424) suggests that during early hominid history there was a distinct survival advantage to observing and remaining close to other animals living in nature. Would it make more sense to gravitate toward a habitat filled with life…or one utterly devoid of it? Humans and other animals generally want to move toward similar things (fresh water, a food supply, cover, reasonable temperatures, etc.) and to avoid other things (dangerous places, temperature extremes, etc.). This might explain why people like lush and living environments, but it probably does not explain, in itself, why we actually love our pets.

Perhaps one among many possible reasons is that pets reflect aspects of our most basic natures. In civilization, these aspects are often hidden by social conventions. In effect, we can actually learn about how we ourselves are by observing and interacting with our pets. Among the various reasons why we love our pets, this strikes me as the most likely one to hold true as well for super-AI systems. Of course, they may also like cats and dogs for the same reason, but in the same way that most of us prefer cats and dogs over turtles and spiders because of the complexity and similarity of mammalian behavior, we can imagine that post-singularity AI systems might prefer human pets because we would be more complex and probably, at least initially, share many of the values, prejudices and interests of the AI systems since their initial programming would inevitably reflect humans.

Another premise of chapter 14 is that even with super-intelligent systems, resources will not be infinite. Many dystopian and utopian science fiction works alike seem to assume that in the future space travel, e.g., will be dirt cheap. That might happen. Ignoring any economic scarcity certainly makes writing more convenient. Realistically though, I see no reason why resources will be essentially infinite; that is, so universally cheap that there will no longer be any contention for them. It’s conceivable that some entirely new physical properties of the universe might be discovered by super-intelligent beings so that this will be the new reality. But it is also possible that “super-intelligent beings” might be even more inclined to over-use the resources of the planet than we humans are and that contention for resources will be even more fierce.

Increasing greediness seems at least as likely a possibility as the alternative; viz., that while humans, as they gained more and more power, became greedier and greedier and used up more and more resources, this would continue only until that magic moment when machines became smarter than people, at which point the machines would suddenly become interested in actually behaving sustainably. Maybe, but why?

Anyway, it’s getting late and past time to feed the six cats.

Interested readers who can may want to tune into a podcast tonight, Monday, May 2nd at 7pm PST using the link below. I will be interviewed about robotics, artificial intelligence and human computer interaction.

https://blab.im/nick-rishwain-roboticslive-ep-1-human-computer-interactions-w-john-charles-truthtablejc

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. The underlying issue has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory is that the overall problem will be solved as well. The tricky part is separating what we consider “problem” from “context” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service included in its employ engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Shaw, Newell and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (Fortran Deductive System) was superior.

Imagine, for example, that you want to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so purchasing the book becomes your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
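
The contrast can be caricatured in a few lines of code. This is my toy rendering of the roommate example, not the actual GPS program: one solver executes its committed subgoal chain no matter what; the other rechecks the top-level goal against new facts before every step.

```python
# Committed subgoal chain vs. opportunistic re-checking (a caricature).
plan = ["borrow roommate's car", "shovel uncle's driveway",
        "get $50", "buy the book", "read the book"]
world = {"roommate offers their copy": True}   # a new fact arrives mid-plan

def gps_style(plan):
    return plan                                 # grinds through the whole chain

def opportunistic(plan, world):
    executed = []
    for step in plan:
        # Before each step: does some new fact already satisfy the top goal?
        if world.get("roommate offers their copy"):
            return executed + ["borrow roommate's copy", "read the book"]
        executed.append(step)
    return executed

print(gps_style(plan))
print(opportunistic(plan, world))
```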

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, and if there is a high degree of trust, and if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat” and discourage “time-wasting” activities like socializing with co-workers and “saving money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solving system.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in chapter eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, thus making globally intelligent choices nearly impossible due to a lack of knowledge and lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M”, “University of Michigan”, “Michigan”, “The University of Michigan”, or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine in any way. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need to allow users to communicate in any way that there was an error in the design. If one tries to communicate “out of band”, one is led to a FAQ page and ultimately a form to fill out. The form presumes that all errors are due to user errors and that all of these user errors again come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by skimming just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A.; Shaw, J.C.; Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Quinlan, J.R. & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625-646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257-269.

Turing’s Nightmares

Turing’s Nightmares: Chapter 10

31 Thursday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, feelings, the singularity, Turing


Chapter Ten of Turing’s Nightmares explores the role of emotions in human life and in the life of AI systems. The chapter mainly explores the issue of emotions from a practical standpoint. When it comes to human experience, one could also argue that, like human life itself, emotions are an end and not just a means to an end. From a human perspective, or at least this human’s perspective, a life without any emotion would be an impoverished life. It is clearly difficult to know the conscious experience of other people, let alone animals, let alone an AI system. My own intuition is that what I feel emotionally is very close to what other people, apes, dogs, cats, and horses feel. I think we can all feel love, both romantic and platonic; and that we all know grief, fear, anger, and peace, as well as a sense of wonder.

As to the utility of emotions, I believe an AI system that interacts extremely well with humans will need to “understand” emotions: how they are expressed, how they can be hidden or faked, and how they impact human perception, memory, and action. Whether a super-smart AI system needs emotions of its own to be maximally effective is another question.

Consider emotions as a way of biasing perception, action, memory, and decision making depending on the situation. If we feel angry, it can make us physically stronger and alter our decision making. For the most part, decision making seems impaired by anger, but anger can also make us feel at least temporarily less guilty about hurting someone or something else. There might be situations where that proves useful. However, since we tend to surround ourselves with people and things we actually like, there are many occasions when anger produces counter-productive results.

There is no reason to presume that a super-intelligent AI system would need to copy the emotional spectrum of human beings. It might invent a much richer palette of emotions, perhaps 100 or even 10,000, that it finds useful in various situations. The best emotional predisposition for doing geometry proofs may be quite different from the best one for algebra proofs, which in turn could differ from what works best for chess, go, or bridge.

Even a very smart machine will not possess infinite resources, so it might be worthwhile for it to have different modes, whether or not we call them “emotions.” Depending on the type of problem to be solved or the situation at hand, not only should different information be input into the system, but that information should be processed differently as well.

For example, if any organism or machine is facing “life or death” situations, it makes sense to be able to react quickly and focus on information such as the location of potential prey, predators, and escape routes. It also makes sense to use well-tested methods rather than taking an unknown amount of time to invent something entirely new.
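
To make the idea of modes concrete, here is a purely illustrative sketch; the mode names, attention lists, and numbers are all invented for this example and are not a claim about how any real AI system works.

```python
# Illustrative sketch: "emotions" as processing modes that bias which
# information is attended to and which methods are tried first.
# All names and numbers here are hypothetical.

MODES = {
    # An "alarm" mode: attend narrowly to threats and exits; strongly
    # prefer fast, well-tested responses over inventing something new.
    "alarm":   {"attend_to": ["threats", "escape_routes"], "explore": 0.05},
    # A "curious" mode: attend broadly; budget time for novel methods.
    "curious": {"attend_to": ["anomalies", "patterns", "context"], "explore": 0.7},
}

def choose_method(mode_name, tested_methods, novel_methods, rng_value):
    """Pick a method: low-explore modes fall back on well-tested ones."""
    mode = MODES[mode_name]
    if rng_value < mode["explore"] and novel_methods:
        return novel_methods[0]
    return tested_methods[0]

# A "life or death" situation: react quickly with a proven method.
print(choose_method("alarm", ["flee"], ["invent_new_tool"], rng_value=0.5))
# A safe situation: the same draw now favors invention.
print(choose_method("curious", ["flee"], ["invent_new_tool"], rng_value=0.5))
```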

People often become depressed when there have been many changes in quick succession. This makes sense, because many large changes mean that “retraining” may be necessary. Instead of rushing headlong into decisions and actions that may no longer be appropriate, it is less error-prone to first watch what occurs in the new situation. Similarly, society has developed rituals around large changes such as funerals, weddings, and baptisms. Because society designs these rituals, the individual facing change does not need to invent something new while their evaluation functions have not yet been updated.

If super-intelligent machines of the future are to keep getting “better,” they will have to be able to explore new possibilities. Just as with carbon-based life forms, intelligent machines will need to produce variety. Some varieties may be much more prone to certain emotional states than others. We could hope that super-intelligent machines might be more tolerant of a variety of emotional styles than people seem to be, but they may not be.

The last theme introduced in chapter ten has been touched on before; viz., that values, whether introduced intentionally or unintentionally, will bias the direction of evolution of AI systems for many generations to come. If the people who build the first AI machines feel antipathy toward feelings and see no benefit to them from a practical standpoint, emotions may eventually disappear from AI systems. Does it matter whether we are killed by a feelingless machine, a hungry shark, or an angry bear?

————————————-

For a recent popular article about empathy and emotions in animals, see Scientific American special collector’s edition, “The Science of Dogs and Cats”, Fall, 2015.

Turing’s Nightmares

Turing’s Nightmares: Chapter 9

25 Friday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Eden, the singularity, Turing, utopia

Why do we find stories of Eden or Utopia so intriguing? Some tend to think that humanity “fell” from an untroubled state of grace. Others believe that Utopia is still to come, brought about by behavioral science (B.F. Skinner’s “Walden Two”) or technology (e.g., Kurzweil’s “The Singularity is Near”). Even American politics often echoes these themes. On the one hand, many conservatives tend to imagine that America was a kind of Eden before big government, political correctness, and fairness came into play (e.g., “Make America Great Again,” used by Reagan as well as Trump; “Restore America Now,” 2012 Ron Paul). On the other hand, many liberal slogans point toward a future Utopia (e.g., Gore: “Leadership for the New Millennium”; Obama: “Yes We Can”; Sanders: “A Future To Believe In”). Indeed, much of the underlying conservative vs. liberal “debate” centers on whether you mainly believe that America was once close to paradise and we need to get back to it, or that, however good America was, it can move much closer to a Utopian vision in the future.

In Chapter 9 of “Turing’s Nightmares,” the idea of Eden is brought in as a method of testing. In this case, we see the story not from God’s perspective or the human perspective but from the perspective of a super-intelligent AI system. Why would such a system try to “create a world”? We could imagine that a super-intelligent, super-powerful being might have run out of challenges of the type we humans generally face (at least in this interim period between the Eden of the past and the Utopia of the future). What to do? Well, why not explore deep philosophical questions such as good vs. evil and free will vs. determinism by creating worlds in which to explore them? Debating such questions, at least among human beings, has not led to any universally accepted answers, and we have been at it for thousands of years. It may be that a full-scale experiment is the way to delve more deeply.

However “intelligent” and “knowledgeable” a super-smart computer system of the future might be, it most likely still could not predict everything about the future. To simulate the universe in detail, the computer would have to be as extensive as the universe. Of course, it could be that many possible states “collapse” for reasons of symmetry, or that a much smaller number of “rules” could predict things. There is no way to tell at this point. As we now understand the world, even determining how to play a “perfect” game of chess by checking all possible moves would require a “more than universe-sized” computer. It could be that a fairly small set of (as yet undetermined) rules could produce the same results, and maybe that would be true of biological and social evolution as well. In Isaac Asimov’s wonderful Foundation series, Hari Seldon develops a way to predict the social and political evolution of humanity from a series of equations. Although he cannot predict individual behavior, collective behavior is predictable. In Chapter 9, our AI system believes it can predict human outcomes but still harbors enough doubt that it needs to test its hypotheses.
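
The chess claim can be checked with rough, back-of-envelope numbers: Shannon’s classic estimate of roughly 10^120 possible chess games, about 10^80 atoms in the observable universe, and an age of the universe of roughly 4 × 10^17 seconds. The operations-per-second figure below is a deliberately generous assumption, not a real hardware spec.

```python
# Back-of-envelope check of the "more than universe-sized computer" claim.
# Inputs are rough published estimates; the ops/sec figure is deliberately generous.
shannon_games  = 10**120       # Shannon's estimate of possible chess games
atoms          = 10**80        # atoms in the observable universe (rough)
age_seconds    = 4 * 10**17    # ~13.8 billion years, in seconds
ops_per_second = 10**15        # pretend every atom is a petaflop computer

total_ops = atoms * age_seconds * ops_per_second   # ~4 x 10**112
print(total_ops < shannon_games)   # True: still short by a factor of ~10**7
```

Even on these absurdly generous assumptions, the universe-as-computer falls short of exhaustive chess by a factor of around ten million.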

There is a very serious and as yet unanswered question about our own future implicit in Chapter 9. It could be that we humans are fundamentally flawed by our genetic heritage. Some branches of primates behave in a very competitive and nasty fashion. It might well be that our genome will prevent us from stopping global climate change, or indeed that we are doomed to over-populate and over-pollute the world, or that we will eventually find “world leaders” who will pull nuclear triggers on an atomic armageddon. It might well be that our “intelligence,” and even the intelligence of AI systems that start from the seeds of our thoughts, sits on a local maximum. Maybe dolphins or sea turtles would be a better starting point. But maybe, just maybe, we can see our way through to overcoming whatever mindlessly selfish predispositions we have and create a greener world that is peaceful, prosperous, and fair. Maybe.

Turing’s Nightmares

Walden Two

The Singularity Is Near

Foundation Series

Turing’s Nightmares: Eight

20 Sunday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, the singularity, Turing


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far out-stripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much on their own compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved over many years within small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, makes clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab mathematics, which had in turn drawn it from India. The Rosetta Stone illustrates that even thousands of years ago, people saw the advantages of being able to translate among languages. In fact, modern English still contains phrases that show the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it pairs the word “will,” with its Germanic/Saxon origins, with the word “testament,” which has origins in Latin.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully conveyed by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are already exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.
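
Google’s actual ranking systems are proprietary, but the published PageRank algorithm illustrates the point: the ranking signal is distilled from millions of human linking decisions rather than from the engine alone. Here is a minimal power-iteration sketch over a made-up four-page web; the link graph is invented for illustration.

```python
# Minimal PageRank power iteration on a toy, hypothetical link graph.
# The key insight: the scores emerge from aggregated human linking behavior.
links = {            # page -> pages it links to (all made up)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85       # standard damping factor from the PageRank paper

for _ in range(50):  # iterate until the ranks stabilize
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        share = damping * rank[p] / len(outs)
        for q in outs:
            new_rank[q] += share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "C" ranks highest
```

Page “C” ends up ranked highest not because of anything intrinsic to it, but because three other pages, that is, three human decisions, point to it.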

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of this suggests that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other, and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J.C. (2015). Chaos, culture, conflict and creativity: Toward a maturity model for HCI4D. Invited keynote, ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J.C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J.C., Kellogg, W.A., & Erickson, T. (2001). The knowledge management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J.C. (2001). An HCI agenda for the next millennium: Emergent global intelligence. In R. Earnshaw, R. Guedj, A. van Dam, & J. Vince (Eds.), Frontiers of Human-Centered Computing, Online Communities, and Virtual Environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2
