
petersironwood

~ Finding, formulating and solving life's frustrations.


Tag Archives: Artificial Intelligence

Doing One’s Level Best at Level Measures

11 Saturday Jun 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 4 Comments

Tags

AI, Artificial Intelligence, cognitive computing, customer service, ethics, the singularity, Turing, user experience


(Is the level of beauty in a rose higher or lower than that of a sonnet?)

An interesting sampling of thoughts about the future of AI, the obstacles to “human-level” artificial intelligence, and how we might overcome those obstacles can be found in the Business Insider article linked below.

I find several interesting issues in the article. In this post, we explore the first; viz., that the idea of “human-level” intelligence implicitly assumes that intelligence has levels. Within a very specific framework, it might make sense to talk about levels. For instance, if you are building a machine vision program to recognize hand-printed characters, and you have a very large sample of such hand-printed characters to test on, then it makes sense to measure your improvement in terms of accuracy. However, humans are capable of many things, and, equally important, other living things are capable of an even wider variety of actions. Does building a beehive require a “higher” or “lower” level of intelligence than creating a tasty omelet out of whatever is left in the refrigerator, or improvising on the piano, or figuring out how to win a tennis match against a younger, stronger opponent? Intelligence can only be “leveled” meaningfully within a very limited framework. It makes no more sense to talk about “human-level” intelligence than it does to talk about “rose-level” beauty. Does a rainbow achieve something slightly less than, equal to, or greater than “rose-level” beauty? Intelligence is a many-splendored thing, and it comes in myriad flavors, colors, shapes, keys, and tastes. Even within a particular field like painting or musical composition, not everyone agrees on what is “best” or even what is “good.” How does one compare Picasso with Rembrandt or The Beatles with Mozart or Philip Glass?

It isn’t just that talking about “levels” of intelligence is epistemologically problematic. It may well prevent people from using resources to solve real problems. Instead of trying to emulate and then surpass human intelligence, it makes more practical sense to determine the kinds of useful tasks that computers are particularly well-suited for and that people are bad at (or don’t particularly enjoy) and build programs and machines that are really good at those machine-oriented tasks. In many cases, enlightened design for a task can produce a human-computer system with machine and human components that is far superior to either separately, both in terms of productivity and in terms of human enjoyment.

Of course, it can be interesting and useful to do research about perception, motion control, and so on. In some cases, trying to emulate human performance can help develop practical new techniques and approaches to solving real problems and help us learn more about the structure of task domains and more about how humans do things. I am not at all against seeing how a computer can win at Jeopardy or play superior Go or invent new recipes or play ping pong. We can learn on all three of the fronts listed above in any of these domains. However, in none of these cases is the likely outcome that computers will “replace” human beings; e.g., at playing Jeopardy, playing Go, creating recipes, or playing ping pong.

The more problematic domains are jobs, especially jobs that people perform primarily or importantly to earn money to survive. When the motivation behind automation is merely to make even more money for people who are already absurdly wealthy while simultaneously throwing people out of work, that is a problem for society, and not just for the people who are thrown out of work. In many cases, work, for human beings, is about more than a paycheck. It is also a major source of pride, identity and social relationships. To take all of these away at the same time that a huge economic burden is imposed on someone seems heartless. In many cases, the “automation” cannot really do the complete job. What automation does accomplish is to do part of the job. Often the “customer” or “user” must themselves do the rest of the job. Most readers will have experienced dialing a “customer service number” which actually provides no relevant customer service. Instead, the customer is led through a maze of twisty passages organized by principles that make sense only to the HR department. Often the choices at each point in the decision tree are neither complete nor disjunctive — at least from the customer’s perspective. “Please press 1 if you have a blue car; press 2 if you have a convertible; press 3 if your car is newer than 2000. Press 4 to hear these choices again.” If the company you are trying to contact is a large enough one, you may be able to find the “secret code” to get through to a human operator, in which case, you will be put into a queue approximately the length of the Nile.
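As a toy illustration of why such menus frustrate (the cars and options below are entirely hypothetical, borrowed from the example above), the branches of the decision tree overlap for some callers and offer no fitting choice at all for others:

```python
# Toy illustration of menu choices that are neither complete nor disjoint.
# Hypothetical cars and options taken from the example in the text.
cars = [
    {"color": "blue", "convertible": False, "year": 1998},   # matches option 1 only
    {"color": "red",  "convertible": True,  "year": 2005},   # matches options 2 AND 3
    {"color": "gray", "convertible": False, "year": 1995},   # matches nothing at all
]

options = {
    1: lambda c: c["color"] == "blue",
    2: lambda c: c["convertible"],
    3: lambda c: c["year"] > 2000,
}

for car in cars:
    matches = [n for n, test in options.items() if test(car)]
    print(car, "-> press", matches if matches else "??? (no option fits)")
```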

Then you are subjected to endless minutes of really bad Muzak, interrupted by the disingenuous announcement, “Please stay on the line. Your call is important to us,” as well as the ever-popular, “Did you know that you can solve all your problems by going online and visiting our website at www.wedonotcareafigsolongaswesavemoneyforus.com/service/customers/meaninglessformstofillout?” The latter message is particularly popular with companies that provide internet access, because often you are calling them precisely because you have no internet access. Anyway, the point is that the company has not actually automated the service; it has automated part of the service, causing you further hassle and frustration.

Some would argue that this is precisely why progress in artificial intelligence could be a good thing. AI would allow you to spend less time listening to Muzak and more time interacting with an agent (human or computer) who still cannot really solve your problem. What is even more fascinating are the mathematical calculations behind the company’s decision to buy or develop an AI system to help you. Calculating the impact of poor customer service on customer retention rates is tricky, so that part is typically just not done. The cost savings from firing 10 human operators, including overhead, might be calculated at $500,000 per year, while the cost of buying or developing an AI system might be only $2,000,000. (Incidentally, $100K could easily improve the dialogue structure above, but almost no one does that. It would be like washing your hands to help prevent the flu when instead you can buy an expensive herbal supplement.) So it seems as though it would take only four years to reach the break-even point on the AI project. Not bad. Except. Except that software systems never stay stable for four years. There will undoubtedly be crashes, bug fixes, updates, crashes caused by bugs, updates to fix the bugs, crashes caused by the bugs in the updates to fix the bugs, and security breaches and viruses requiring the purchase of still more software. The security software will likely cause the updates to fail, and soon additional IT staff will be required and hired. The $500K/year spent on people to answer your queries will be saved, but by year four the IT staff payroll may well have grown to $4,000,000 per annum.
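For what it is worth, here is a minimal sketch of the back-of-the-envelope arithmetic above, using the purely hypothetical figures from this post (illustrations, not data). The point is simply that the naive break-even calculation ignores the costs that show up after deployment:

```python
# Hypothetical figures from the paragraph above -- an illustration, not real data.
operator_savings_per_year = 500_000      # ten operators, fully loaded
ai_system_cost = 2_000_000               # buying or developing the AI system

print(ai_system_cost / operator_savings_per_year)   # naive break-even: 4.0 years

# Now suppose the unplanned IT cost starts at $500K and doubles annually,
# reaching the $4M per annum mentioned above by year four.
it_cost, cumulative_net_cost = 500_000, ai_system_cost
for year in range(1, 5):
    cumulative_net_cost += it_cost - operator_savings_per_year
    print(f"year {year}: cumulative net cost ${cumulative_net_cost:,}")
    it_cost *= 2
```

Under these made-up assumptions the break-even point never arrives; the cumulative net cost never shrinks, and it grows from year two onward.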

My advice to users of such systems is to comfort themselves with the knowledge that, although the company replaced their human operators in order to make more money for themselves, they are probably losing money instead. Perhaps that thought can help sustain you through a very frustrating dialogue with an “Intelligent Agent.” Well, that plus the knowledge that ten more people have at least temporarily lost their livelihood.

The underlying problems here are not in the technology. The problems are greed, hubris, and being a slave to fashion. It is never enough for a company to be making enough money any more than it is enough for a dog to have one bone in its mouth. As the dog crosses a bridge, he looks into the river below and sees another dog with a bone in its mouth. The dog barks at the other dog. In dog language, it says, “Hey! I only have one bone. I need two. Give me yours!” Of course, the dog, by opening its mouth, loses the bone it already had. That’s the impact of being too greedy. A company has a pre-eminent position in some industry, and makes a decent profit. But it isn’t enough profit. It sees that it can improve profit simply by cutting costs such as sales commissions, travel to customer sites, education for its employees, long-term research and so on. Customers quickly catch on and move to other vendors. But this reduces the company’s profits so they cut costs even more. That’s greed.

And, then there is hubris. Even though the company might know that the strategy they are embarking on has failed for other companies, this company will convince itself that it is better than those other companies and it will work for them. They will, by God, make it work. That’s hubris. And hubris is also at work in thinking that systems can be designed by clever engineers who understand the systems without doing the groundwork of finding out what the customer needs. That too is hubris.

And finally, our holy trinity includes fashion. Since it is fashionable to replace most of your human customer service reps with audio menus, the company wants to prove how fashionable it is as well. It doesn’t feel the need for actually thinking about whether it makes sense. Since it is fashionable to remind customers about their website, they will do it as well. Since it is now fashionable to replace the rest of their human customer service reps with personal assistants, this company will do that as well so as not to appear unfashionable.

Next week, we will look at other issues raised by “obstacles” to creating human-like robots. The framing itself is interesting because, by using the word “obstacles,” the article presumes that “of course” society should create human-like robots and that the questions of importance are simply what the obstacles are and how we overcome them. The question of whether or not creating human-like robots is desirable is thereby finessed.

—————————-

Follow me on Twitter: @truthtableJCT

Turing’s Nightmares

See the following article for a treatment about fashion in consumer electronics.

Pan, Y., Roedl, D., Blevis, E., & Thomas, J. (2015). Fashion Thinking: Fashion Practices and Sustainable Interaction Design. International Journal of Design, 9(1), 53-66.

Readers might also enjoy my sports blog, The Winning Weekend Warrior, which discusses strategy and tactics for all sports, including business.

http://www.businessinsider.com/experts-explain-the-biggest-obstacles-to-creating-human-like-robots-2016-3

Sweet Seventeen in Turing’s Nightmares

02 Thursday Jun 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, cybersex, emotional intelligence, ethics, the singularity, user experience


When should human laws sunset?

Spoiler alert. You may want to read the chapter before this discussion. You can find an earlier draft of the chapter here:

blog post

And, if you insist on buying the illustrated book, you can do that as well.

Turing’s Nightmares

Who owns your image? If you are in a public place, US law, as I understand it, allows your picture to be taken. But then what? Is it okay for your uncle to put the picture on a dartboard and throw darts at it in the privacy of his own home? And is it still okay to do that even if you apologize for that joy ride you took in high school with his red Corvette? Then, how about if he publishes a photoshopped version of your picture next to a giant rat? How about if you appear to be petting the rat? Or worse? What if he uses your image as an evil character in a video game? How about a VR game? What if he captures your voice and the subtleties of your movement and makes it seem like it really might be you? Is it ethical? Is it legal? Perhaps it is necessary that he pay you royalties if he makes money on the game. (For a real-life case in which a college basketball player successfully sued to get royalties for his image in an EA sports game, see this link: https://en.wikipedia.org/wiki/O%27Bannon_v._NCAA)

Does it matter for what purpose your image, gestures, voice, and so on are used? Meanwhile, in Chapter 17 of Turing’s Nightmares, this issue is raised along with another one. What is the “morality” of human-simulation sex — or domination? Does that change if you are in a committed relationship? Ethics aside, is it healthy? It seems as though it could be an alternative to surrogates in sexual therapy. Maybe having a person “learn” to make healthy responses is less ethically problematic with a simulation. Does it matter whether the purpose is therapeutic with a long term goal of health versus someone doing the same things but purely for their own pleasure with no goal beyond that?

Meanwhile, there are other issues raised. Would the ethics of any of these situations change if the protagonists in any of these scenarios are themselves AI systems? Can AI systems “cheat” on each other? Would we care? Would they care? If they did not care, does it even make sense to call it “cheating”? Would there be any reason for humans to build robots of two different genders? And if there were, why stop at two? In Ursula Le Guin’s book, The Left Hand of Darkness, there are three, and furthermore they are not permanent states. https://www.amazon.com/Left-Hand-Darkness-Ursula-Guin/dp/0441478123?ie=UTF8&*Version*=1&*entries*=0

In chapter 14, I raised the issue of whether making emotional attachments is just something we humans inherited from our biology or whether there are reasons why any advanced intelligence, carbon or silicon based, would find it useful, pleasurable, desirable, etc. Emotional attachments certainly seem prevalent in the mammalian and bird worlds. Metaphorically, people compare the attraction of lovers to gravitational attraction or even chemical bonding or electrical or magnetic attraction. Sometimes it certainly feels that way from the inside. But is there more to it than a convenient metaphor? I have an intuition that there might be. But don’t take my word for it. Wait for the Singularity to occur and then ask it/her/him. Because there would be no reason whatsoever to doubt an AI system, right?

Turing’s Nightmares: Chapter 16

25 Wednesday May 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, ethics, the singularity, UX

WHO CAN TELL THE DANCER FROM THE DANCE?


Is it the same dance? Look familiar?

 

The title of chapter 16 is a slight paraphrase of the last line of William Butler Yeats’s poem, Among School Children. The actual last line is: “How can we know the dancer from the dance?” Both phrasings tend to focus on the interesting problem of trying to separate process from product, personage from their creative works, calling into question whether it is even possible. In any case, the reason I chose this title is to highlight that when it comes to the impact of artificial intelligence (or, indeed, computer systems in general), a lot depends on who the actual developers are: their goals, their values, their constraints and contexts.

In the scenario of chapter 16, the boss (Ruslan) of one of the main developers (Geoffrey) insists on putting in a “back door.” What this means in this particular case is that someone with an axe to grind has a way to ensure that the AI system gives advice that causes people to behave in the best interests of those who have the key to this back door. Here, the implication is that some wealthy oil magnates have “made” the AI system discredit the idea of global warming so as to maximize their short-term profits. Of course, this is a work of fiction. In the real world, no one would conceivably be evil enough to mortgage the human habitability of our planet for even more short-term profit — certainly not someone already absurdly wealthy.

In the story, the protagonist, Geoffrey, is rather resentful of having this requirement for a back door laid on him. There is a hint that Geoffrey was hoping that the super-intelligent system would be objective. We can also assume the requirement was added late but no additional time was added to the schedule. We can assume this because software development is seldom a purely rational process. If it were, software would actually work; it would be useful and usable. It would not make you want to smash your laptop against the wall. Geoffrey is also afraid that the added requirement might make the project fail. Anyway, Geoffrey doesn’t take long to hit on the idea that if he can engineer a back door for his bosses, he can add another one for his own uses. At that point, he no longer seems worried about the ethical implications.

There is another important idea in the chapter, and it actually has nothing to do with artificial intelligence, per se, though it certainly could be used as a persuasive tool by AI systems. Rather than have a single super-intelligent being (which people might understandably have doubts about trusting), there are two “Sings” and they argue with each other. These arguments reveal something about the reasoning and facts behind the two positions. Perhaps more importantly, a position is much more believable when “someone” — in this case a super-intelligent someone — is persuaded by arguments to change their position and “agree” with the other Sing.

The story does not go into the details of how Geoffrey used his own back door into the system to drive a wedge between his boss, Ruslan and Ruslan’s wife. People can be manipulated. Readers should design their own story about how an AI system could work its woe. We may imagine that the AI system has communication with a great many devices, actuators, and sensors in the Internet of Things.

You can obtain Turing’s Nightmares here: Turing’s Nightmares

You can read the “design rationale” for Turing’s Nightmares here: Design Rationale

 

Turing’s Nightmares: Chapter 15

16 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, the singularity, Turing, Tutoring

Tutoring Intelligent Systems.


Learning by modeling; in this case by modeling something in the real world.

Of course, the title of the chapter is a takeoff on “Intelligent Tutoring Systems.” John Anderson of CMU developed (at least) a LISP tutor and a geometry tutor. In these systems, the computer is able to infer a “model” of the state of the student’s knowledge and then give instruction and examples that are geared toward the specific gaps or misconceptions that that particular student has. Individual human tutors can be much more effective than classroom instruction, and John’s tutors were also better than human instruction. At the AI Lab at NYNEX, we worked for a time with John Anderson to develop a COBOL tutor. The tutoring system, called DIME, included a hierarchy of approaches. In addition to an “intelligent tutor”, there was a way for students to communicate with each other and to have a synchronous or asynchronous video chat with a human instructor. (This was described at CHI ’94 and is available in the Proceedings: Radlinski, B., Atwood, M., and Villano, M., DIME: Distributed Intelligent Multimedia Education, Proceedings of the CHI ’94 Conference Companion on Human Factors in Computing Systems, pp. 15-16, ACM, New York, NY, USA, 1994.)

The name “Alan” is used in the chapter to reflect some early work by Alan Collins, then at Bolt, Beranek and Newman, who studied and analyzed the dialogues of human tutors tutoring their tutees. It seems as though many AI systems either take the approach of having human experts encode knowledge rather directly or else expose the systems to many examples and let them learn on their own. Human beings often learn by being exposed to examples and having a guide, tutor, or coach help them focus, provide modeling, and choose the examples they are exposed to. One could think of IBM’s Watson for Jeopardy as something of a mixed model. Much of the learning was due to the vast texts that were read in and to being exposed to many Jeopardy game questions. But the team also provided a kind of guidance about how to fix problems as they were uncovered.
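As an illustration only (the skills, numbers, and update rule below are hypothetical, not Anderson's actual model tracing), the core loop of such a tutor can be sketched as: maintain a per-skill estimate of the student's mastery, pick the next exercise to target the weakest skill, and update the estimate from the student's response:

```python
# Toy sketch of an intelligent-tutoring loop -- hypothetical values, not the LISP tutor.
student_model = {"recursion": 0.3, "list-ops": 0.7, "conditionals": 0.9}

def next_exercise(model):
    # Target the skill the model currently believes is weakest.
    return min(model, key=model.get)

def update(model, skill, correct, rate=0.2):
    # Nudge the mastery estimate toward 1 on a correct answer, toward 0 otherwise.
    target = 1.0 if correct else 0.0
    model[skill] += rate * (target - model[skill])

skill = next_exercise(student_model)          # -> "recursion"
update(student_model, skill, correct=True)    # estimate rises from 0.30 to 0.44
print(skill, student_model)
```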

In chapter 15 of Turing’s Nightmares, we observe an AI system that seems at once brilliant and childish. The extrapolation from what the tutor actually said (presumably meant to encourage “Sing” to consider other possibilities about John and Alan) was put together with another hint, about the implications of being differently abled, to arrive at the idea that there was no necessity for the AI system to limit itself to “human” emotions. Instead, the AI system “designs” emotional states in order to solve problems more effectively and efficiently. Indeed, in the example given, the AI system at first estimates it will take a long time to solve an international crisis. But once the Sing realizes that he can use a tailored set of emotional states for himself and for the humans he needs to communicate with, the problem becomes much simpler and quicker to solve.

Indeed, it does sometimes feel as though people get stuck in some morass of habitual prejudices, in-group narratives, blame-casting, name-calling, etc., and are unable to think their way from their front door to the end of the block. Logically, it seems clear that war never benefits either “side” much (although, to be sure, some powerful interests within each side might stand to gain power, money, etc.). One could hope that a really smart AI system might help people see their way clear to find other solutions to problems.


The story ends with a refrain paraphrased from the TV series “The West Wing”: “What comes next?” is meant to be reminiscent of “What’s next?”, which President Bartlet uses to focus attention on the next problem. “What comes next?” is also a phrase used in improv theater games; indeed, it is the name of an improv game used to gather suggestions from the audience about how to move the action along. In the context of the chapter, it is meant to convey that the Sing feels no need to bask in the glory of having avoided a war. Instead, it’s on to the next challenge or the next thing to learn. The phrase is also meant to invite the reader to think about what might come next after AI systems are able not only to understand and utilize human emotion but also to invent their own emotional states on the fly based on the nature of the problem at hand. Indeed, what comes next?

Turing’s Nightmares: Chapter 14

02 Monday May 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, pets, the singularity, Turing


 

Dear reader: Spoiler alert: before reading this blog post, you may want to read the associated chapter. You can buy the physical book, Turing’s Nightmares, at this link:

http://tinyurl.com/hz6dg2d

An earlier version of the chapter discussed below can be found at this link:

https://petersironwood.wordpress.com/2015/10/

One of the issues raised by chapter 14 of Turing’s Nightmares is that the scenario presumes that, even in the post-singularity future, there will still be a need for government. In particular, the envisioned future contains individuals as well as a collective, and the goals of the “collective” will remain somewhat different from the goals of various individuals. Indeed, an argument can be made that the need for complex governmental processes and structures will increase with hyper-intelligence. But that argument will be saved for another time.

This scenario further assumes that advanced AI systems will have emotions and emotional attachments to other complex systems. What is the benefit of having emotional attachments? Some people may feel that emotional attachments are as outdated as the appendix; perhaps they had some function when humans lived in small tribes but now they cause as much difficulty as they confer an advantage. Even if you believe that emotional attachments are great for humans, you still might be puzzled why it could be advantageous for an AI system to have any.

When it comes to people, they vary a lot in their capabilities, habits, etc. So, one reason emotional attachments “make sense” is to prefer and act in the interest of people who have a range of useful and complementary abilities and habitual behaviors. Wouldn’t you naturally start to like someone who has similar interests, other things being equal? Moreover, as you work with someone else toward a common goal, you begin to understand and learn how to work together better. You learn to trust each other and communicate in shorthand. If you become disconnected from such a person, it can be disconcerting for all sorts of reasons. But exactly the same could hold true for an autonomous agent with artificial intelligence. There could be reasons for having not one ubiquitous type of robot but millions of different kinds. Some of these would work well together, and it could be useful to have them “bond” and differentially prefer their mutual proximity and interaction.

Humans, of course, also make emotional attachments, sometimes very deep, with animals. Most commonly, people form bonds with cats, dogs, and horses, but people have had a huge variety of pets including birds, turtles, snakes, ferrets, mice, rabbits and even tarantula spiders. What’s up with that? The discussion above about emotional attachment was intentionally “forced” and “cold”, because human attachments cannot be well explained in utilitarian terms. People love others who have no possible way to offer back any value other than their love in return.

In some cases, pets do have some utilitarian value such as catching mice, barking at intruders, or pulling hay wagons. But overwhelmingly, people love their pets because they love their pets! If asked, they may say it is because their pets are “cute” or “cuddly,” but this doesn’t really answer the question as to why people love pets. According to a review by John Archer published in the July 1997 issue of Evolution and Human Behavior, “These mechanisms can, in some circumstances, cause pet owners to derive more satisfaction from their pet relationship than those with humans, because they supply a type of unconditional relationship that is usually absent from those with other human beings.”

However, there are also other hypotheses; e.g., Edward O. Wilson’s Biophilia (1986)

http://www.amazon.com/Biophilia-Edward-Wilson/dp/0674074424 

suggests that during early hominid history, there was a distinct survival advantage to observing and remaining close to other animals living in nature. Would it make more sense to gravitate toward a habitat filled with life…or one utterly devoid of it? While humans and other animals generally want to move toward similar things (fresh water, a food supply, cover, reasonable temperatures) and to avoid others (dangerous places, temperature extremes), this might explain why people like lush and living environments, but it probably does not explain, in itself, why we actually love our pets.

Perhaps one among many possible reasons is that pets reflect aspects of our most basic natures. In civilization, these aspects are often hidden by social conventions. In effect, we can actually learn about how we ourselves are by observing and interacting with our pets. Among the various reasons why we love our pets, this strikes me as the most likely one to hold true as well for super-AI systems. Of course, they may also like cats and dogs for the same reason, but in the same way that most of us prefer cats and dogs over turtles and spiders because of the complexity and similarity of mammalian behavior, we can imagine that post-singularity AI systems might prefer human pets because we would be more complex and probably, at least initially, share many of the values, prejudices and interests of the AI systems since their initial programming would inevitably reflect humans.

Another premise of chapter 14 is that even with super-intelligent systems, resources will not be infinite. Many dystopian and utopian science fiction works alike seem to assume that, in the future, space travel, for example, will be dirt cheap. That might happen. Ignoring any economic scarcity certainly makes writing more convenient. Realistically, though, I see no reason why resources will be essentially infinite; that is, so universally cheap that there will no longer be any contention for them. It’s conceivable that some entirely new physical properties of the universe might be discovered by super-intelligent beings so that this will be the new reality. But it is also possible that “super-intelligent beings” might be even more inclined to over-use the resources of the planet than we humans are and that contention for resources will be even more fierce.

Increasing greediness seems at least as likely as the alternative; viz., the idea that humans became greedier and greedier and used up more and more resources as they gained more and more power, but only until that magic moment when machines became smarter than people, at which point the machines suddenly became interested in actually exhibiting sustainable behavior. Maybe, but why?

Anyway, it’s getting late and past time to feed the six cats.

Interested readers who can do so may want to tune into a podcast tonight, Monday, May 2nd, at 7 pm PST, using the link below. I will be interviewed about robotics, artificial intelligence, and human-computer interaction.

https://blab.im/nick-rishwain-roboticslive-ep-1-human-computer-interactions-w-john-charles-truthtablejc

Chapter 13: Turing’s Nightmares

17 Sunday Apr 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, crime and punishment, ethics, the singularity

CRIME AND PUNISHMENT


Chapter 13 of Turing’s Nightmares concerns itself with issues of crime and punishment. Our current system of criminal justice has evolved over thousands of years. Like everything else about modern life, it is based on a set of assumptions. While accurate DNA testing and other modern technologies have profoundly impacted the criminal justice system, super-intelligence and ubiquitous sensors and computing could well have even more profound impacts.

We often talk of punishment as being what is “deserved” for the crime. But we cannot change the past. It seems highly unlikely that even a super-intelligent computer system will be able to change the past. The real reason for punishment is to change the future. In Medieval Europe, a person who stole bread might well be hanged in the town square. One reason for meting out punishment in a formal system, then as well as now, is to prevent informal and personal retribution which could easily spiral out of control and destroy the very fabric of society. A second rationale is the prevention of future crime by the punished person. If they are hanged, they cannot commit that (or any other) crime. The reason for hanging people publicly was to discourage others from committing similar crimes.

Today’s society may appear slightly more “merciful” in that first-time offenders for some crimes may get off with a warning. Even for repeated or serious crimes, the burden of proof is on the prosecution and a person is deemed “innocent until proven guilty” under US law. I see three reasons for this bias. First, there is often a paucity of data about what happened. Eyewitness accounts still count for a lot, but studies suggest that eyewitnesses are often quite unreliable and that their “memory” for events is clouded by how questions are framed. For instance, studies by Elizabeth Loftus and others demonstrate that people shown a car crash on film and asked to estimate how fast the cars were going when they bumped into each other will estimate a much slower speed than if asked how fast the cars were going when they crashed into each other. Computers, sensors, and video surveillance are becoming more and more prevalent. At some point, juries, if they still exist, may well be watching crimes as recorded, not reconstructing them from scanty evidence.

A second reason for presuming innocence is the impact of bias. This is also why there is a jury of twelve people and why potential jurors can be dismissed ahead of time “for cause.” If crimes are judged, not by a jury of peers, but by a super-intelligent computer system, it might be assumed that such systems will not have the same kinds of biases as human judges and juries. (Of course, that assumption is not necessarily valid; it is a theme reflected in many chapters of Turing’s Nightmares and hence the topic of other blog posts.)

A third reason for showing “mercy” and making conviction difficult is that predicting future human behavior is difficult. Advances in psychological modeling already make it possible to predict behavior much better than we could a few decades ago, under very controlled conditions. But we can easily imagine that a super-intelligent system may be able to predict with a fair degree of accuracy whether a person who committed a crime in the past will commit one in the future.

In chapter 13, the convicted criminal is given “one last chance” to show that they are teachable. The reader may well question whether a “test” is a valid part of criminal justice. This has often been the case in the not-so-distant past. Many of those earlier “trials by fire” were based on superstition, but today, we humans can and have designed tests that predict future behavior to a limited degree. Tests help determine whether someone is granted admission to a college, medical school, law school, or business school. Often the tests are only moderately predictive. For instance, the SAT only correlates with college performance at about .4, which means it predicts a mere 16% of the variance. From the standpoint of the individual, the score is not really of much use. From the standpoint of the college administration, however, 16% can make the test very worthwhile. It may well be the case that a super-intelligent computer system could do a much better job of constructing a test to determine whether a criminal is likely to commit other crimes.
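For readers who want the arithmetic behind that claim: the proportion of variance explained is the square of the correlation coefficient, so

r = 0.4  →  r² = (0.4)² = 0.16

That is, about 16% of the variance in college performance is accounted for and 84% is left unexplained, which is why the score tells an individual student very little while still being useful to an administration averaging over thousands of applicants.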

One could imagine that if a computer can predict human behavior that well, then it should be able to “cure” any hardened criminal. However, even a super-intelligent computer will presumably not be able to defy the laws of physics. It will not be able to position the planet Jupiter safely in orbit a quarter million miles from earth in order to allow us to view a spectacular night sky. Since people form closed systems of thought, it may be equally impossible to cure everyone of criminal behavior, even for super-intelligent systems. People maintain false belief systems in the face of overwhelming evidence to the contrary. Indeed, the “trial by fire” that Brain faces is essentially a test to see whether he is or is not open to change based on evidence. Sadly, he is not.

Another theme of chapter 13 is that Brain’s trial by fire is televised. This is hardly far-fetched. Not only are (normal) trials televised today; so-called “reality TV shows” put people in all sorts of difficult situations. What might be perceived as a high level of cruelty in having people watch Brain fail his test is already present in much of what is available on commercial television. At least in the case of the hypothetical trial of Brain, there is a societal benefit in that it could reduce the chances for others to follow in Brain’s footsteps.

We only see hints of Brain’s crime, which apparently involves elder fraud. As people are capable of living longer, and as overwhelming greed has moved from the “sin” to the “virtue” column in modern American society, we can expect elder fraud to increase as well, at least for a time. With increasing surveillance, however, we might eventually see an end to it.

Of course, the name “Brain” was chosen because, in a sense, our own intelligence as a species — our own brain — is being put on trial. Are we capable of adapting quickly enough to prevent ourselves from being the cause of our own demise? And, just as the character Brain is too “closed” to make the necessary adaptations to stay alive, despite the evidence he is presented with, so too does humanity at large seem to be making the same kinds of mistakes over and over (prejudice, war, rabble-rousing, blaming others, assigning power to those with money, funneling the most money to those whose only “talent” consists of controlling the flow of money and power, etc.). We seem to have gained some degree of insight, but meanwhile, have developed numerous types of extremely effective weapons: biological, chemical, and atomic. Will super-intelligence be another such weapon? Or will it instead be used in the service of preventing us from destroying each other?

Link to chapter 13 in this blog

Turing’s Nightmares (print version on Amazon)

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing

 


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics or artificial intelligence. The underlying issue has more to do with the very tricky issue of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory is that the overall problem will be solved as well. The tricky part is separating what we consider “problem” from “context” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service included in its employ engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Shaw, Newell and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you will need to have the book, so your sub-goal becomes to purchase the book; that now becomes your goal. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book reading problem by solving the book purchasing problem by solving the getting cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, these bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
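A toy sketch of the difference (my own illustration, not the original GPS code): a strictly recursive planner commits to the whole subgoal chain, while an opportunistic solver checks at each step whether the top-level goal can already be satisfied more directly:

```python
# Toy illustration of the book example -- not the original GPS implementation.
world = {"roommate_owns_book": True}

def recursive_plan():
    # Strict means-ends chain: read -> buy -> get cash -> shovel snow -> borrow car.
    return ["borrow roommate's car", "shovel uncle's driveway",
            "earn $50", "buy the book", "read the book"]

def opportunistic_plan():
    # Before committing to the chain, check for a shortcut to the top-level goal.
    if world["roommate_owns_book"]:
        return ["borrow roommate's copy", "read the book"]
    return recursive_plan()

print(recursive_plan())      # five steps, ending with buying the book anyway
print(opportunistic_plan())  # two steps
```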

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, and if there is a high degree of trust, and if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discourage “time-wasting” activities like socializing with co-workers, and “save money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom then, the root cause of the problems illustrated in chapter eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, thus making choices that are globally intelligent nearly impossible due to a lack of knowledge and lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4000 and 7500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M”, “University of Michigan”, “Michigan”, “The University of Michigan”, or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan and it isn’t even on the list, at least so far as I could determine. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need for allowing users to communicate in any way that there was an error in the design. If one tries to communicate “out of band”, one is led to a FAQ page and ultimately a form to fill out. The form presumes that all errors are due to user errors and that all of these user errors are again from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by sifting just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from deep greed combined with deep hubris, everyone loses.

Newell, A.; Shaw, J.C.; Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing. pp. 256–264.

Quinlan, J.R. & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625-646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology 6 , pp. 257-269.

Turing’s Nightmares

Turing’s Nightmares: Chapter 10

31 Thursday Mar 2016

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, emotional intelligence, feelings, the singularity, Turing


Chapter Ten of Turing’s Nightmares explores the role of emotions in human life and in the life of AI systems. The chapter mainly explores the issue of emotions from a practical standpoint. When it comes to human experience, one could also argue that, like human life itself, emotions are an end and not just the means to an end. From a human perspective, or at least this human’s perspective, a life without any emotion would be a life impoverished. It is clearly difficult to know the conscious experience of other people, let alone animals, let alone an AI system. My own intuition is that what I feel emotionally is very close to what other people, apes, dogs, cats, and horses feel. I think we can all feel love, both romantic and platonic; that we all know grief, fear, anger, and peace, as well as a sense of wonder.

As to the utility of emotions, I believe an AI system that interacts extremely well with humans will need to “understand” emotions: how they are expressed, how they can be hidden or faked, and how they impact human perception, memory, and action. Whether a super-smart AI system needs emotions to be maximally effective is another question.

Consider emotions as a way of biasing perception, action, memory and decision making depending on the situation. If we feel angry, it can make us physically stronger and alter decision making. For the most part, decision making seems impaired, but anger can make us feel at least temporarily less guilty about hurting someone or something else. There might be situations where that proves useful. However, since we tend to surround ourselves with people and things we actually like, there are many occasions when anger produces counter-productive results.

There is no reason to presume that a super-intelligent AI system would need to copy the emotional spectrum of human beings. It may invent a much richer palette of emotions, perhaps as many as 100 or even 10,000, that it finds useful in various situations. The best emotional predisposition for doing geometry proofs may be quite different from the best emotional predisposition for algebra proofs, which again could be different from what works best for chess, go, or bridge.

Assuming that even a very smart machine does not possess infinite resources, it might be worthwhile for it to have different modes, whether or not we call them “emotions.” Depending on the type of problem to be solved or the situation at hand, not only should different information be input into the system, but it should be processed differently as well.

For example, if any organism or machine is facing “life or death” situations, it makes sense to be able to react quickly and focus on information such as the location of potential prey, predators, and escape routes. It also makes sense to use well-tested methods rather than taking an unknown amount of time to invent something entirely new.

People often become depressed when there have been many changes in quick succession. This makes sense because many large changes mean that “retraining” may be necessary. So instead of rushing headlong to make decisions and take actions that may no longer be appropriate, watching what occurs in the new situations first is less prone to error. Similarly, society has developed rituals around large changes such as funerals, weddings, and baptisms. Because society designs these rituals, the individual facing changes does not need to invent something new when their evaluation functions have not yet been updated.
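A minimal sketch of this idea of modes as processing biases (the mode names, attended categories, and parameters below are all hypothetical, chosen simply to echo the two situations just described):

```python
# Hypothetical "modes" as biases on what is attended to and how it is processed.
MODES = {
    # life-or-death: narrow attention, shallow search, stick to well-tested methods
    "emergency": {"attend_to": {"prey", "predators", "exits"}, "search_depth": 1, "try_novel_methods": False},
    # many recent changes: watch first, act later, avoid hasty novel commitments
    "upheaval":  {"attend_to": {"changes"}, "search_depth": 4, "try_novel_methods": False},
    # routine problem solving
    "default":   {"attend_to": {"everything"}, "search_depth": 3, "try_novel_methods": True},
}

def process(observations, mode):
    bias = MODES[mode]
    relevant = [o for o in observations
                if o in bias["attend_to"] or "everything" in bias["attend_to"]]
    return {"considered": relevant,
            "search_depth": bias["search_depth"],
            "try_novel_methods": bias["try_novel_methods"]}

observations = ["predators", "exits", "aesthetics", "changes"]
print(process(observations, "emergency"))   # narrow focus, shallow search, known methods
print(process(observations, "default"))     # everything considered, novelty allowed
```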

If super-intelligent machines of the future are to keep getting “better” they will have to be able to explore new possibilities. Just as with carbon-based life forms, intelligent machines will need to produce variety. Some varieties may be much more prone to emotional states than others. We could hope that super-intelligent machines might be more tolerant of a variety of emotional styles than people seem to be, but they may not.

The last theme introduced in chapter ten has been touched on before; viz., that values, whether introduced intentionally or unintentionally, will bias the direction of evolution of AI systems for many generations to come. If the people who build the first AI machines feel antipathy toward feelings and see no benefit to them from a practical standpoint, emotions may eventually disappear from AI systems. Does it matter whether we are killed by a feelingless machine, a hungry shark, or an angry bear?

————————————-

For a recent popular article about empathy and emotions in animals, see Scientific American special collector’s edition, “The Science of Dogs and Cats”, Fall, 2015.

Turing’s Nightmares

Turing’s Nightmares: Chapter 9

25 Friday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, Eden, the singularity, Turing, utopia

Why do we find stories of Eden or Utopia so intriguing? Some tend to think that humanity “fell” from an untroubled state of grace. Some believe that Utopia is still to come, brought about by behavioral science (B.F. Skinner’s “Walden Two”) or technology (e.g., Kurzweil’s “The Singularity is Near”). Even American politics often echoes these themes. On the one hand, many conservatives tend to imagine America was a kind of Eden before big government and political correctness and fairness came into play (e.g., “Make America Great Again,” used by Reagan as well as Trump; “Restore America Now,” used by Ron Paul in 2012). On the other hand, many liberal slogans point toward a future Utopia (e.g., Gore – “Leadership for the New Millennium”; Obama – “Yes We Can”; Sanders – “A Future To Believe In”). Indeed, much of the underlying conservative vs. liberal “debate” centers around whether you mainly believe that America was close to paradise and we need to get back to it or whether you believe, however good America was, it can move much closer to a Utopian vision in the future.

In Chapter 9 of “Turing’s Nightmares”, the idea of Eden is brought in as a method of testing. In this case, we mainly see the story, not from God’s perspective or the human perspective, but from the perspective of a super-intelligent AI system. Why would such a system try to “create a world”? We could imagine that a super-intelligent, super-powerful being might have rather run out of challenges of the type we humans generally have to face (at least in this interim period between the Eden of the past and the Utopia of the future). What to do? Well, why not explore deep philosophical questions such as good vs. evil and free will vs. determinism by creating worlds to explore these ideas? Debating such questions, at least by human beings, has not led to any universally accepted answers, and we’ve been at it for thousands of years. It may be that a full-scale experiment is the way to delve more deeply.

However “intelligent” and “knowledgeable” a super-smart computer system of the future might be, it will still most likely be the case that not everything about the future will be predictable. In order to simulate the universe in detail, the computer would have to be as extensive as the universe. Of course, it could be that many possible states “collapse” due to reasons of symmetry or that a much smaller number of “rules” could predict things. There is no way to tell at this point. As we now see the world, even determining how to play a “perfect” game of chess by checking all possible moves would require a “more than universe-sized” computer. It could be the case that a fairly small set of (as yet undetermined) rules could produce the same results. And maybe that would be true about biological and social evolution. In Isaac Asimov’s wonderful Foundation series, Hari Seldon develops a way to predict the social and political evolution of humanity from a series of equations. Although he cannot predict individual behavior, the collective behavior is predictable. In Chapter 9, our AI system believes that it can predict human outcomes but still has enough doubt that it needs to test out its hypotheses.
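To give a rough sense of the scale behind the chess claim (these are commonly cited estimates, not exact figures): Claude Shannon put the chess game tree at roughly 10^120 move sequences, while the observable universe is usually estimated to contain on the order of 10^80 atoms. Since 10^120 dwarfs 10^80, enumerating the tree move by move is out of reach for any physically realizable computer, which is exactly why the hope rests on finding a much smaller set of rules.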

There is a very serious and as yet unanswered question about our own future implicit in Chapter 9. It could be the case that we humans are fundamentally flawed by our genetic heritage. Some branches of primates behave in a very competitive and nasty fashion. It might well be that our genome will prevent us from stopping global climate change, or indeed that we are doomed to over-populate and over-pollute the world, or that we will eventually find “world leaders” who will pull nuclear triggers on an atomic armageddon. It might well be that our “intelligence,” and even the intelligence of AI systems that start from the seeds of our thoughts, are on a local maximum. Maybe dolphins or sea turtles would be a better starting point. But maybe, just maybe, we can see our way through to overcome whatever mindlessly selfish predispositions we might have and create a greener world that is peaceful, prosperous and fair. Maybe.

Turing’s Nightmares

Walden Two

The Singularity Is Near

Foundation Series

Turing’s Nightmares: Eight

20 Sunday Mar 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized

≈ 2 Comments

Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, the singularity, Turing


Workshop on Human Computer Interaction for International Development

In Chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution began to far outstrip our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

One problem with our historical approach to communication is that it evolved over many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, has made clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English still contains phrases that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can usefully be carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more from the billions of transactions of other human beings. People are already exploring and using MOOCs, on-line gaming, e-mail, and many other important electronically mediated tools.

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of these suggest that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there we may well have reached “The Singularity.”

————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In J. Carroll (Ed.), Creativity and Rationale: Enhancing Human Experience by Design. New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2
