petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: chatgpt

Turing’s Nightmares: Axes to Grind

10 Friday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, emotional intelligence, empathy, ethics, M-trans, philosophy, Samuel's Checker Player, technology, the singularity

Turing Seven: “Axes to Grind”

“No, no, no! That’s absurd, David. It’s about intelligence pure and simple. It’s not up to us to predetermine Samuel Seven’s ethics. Make it intelligent enough and it will discover its own ethics, which will probably be superior to human ethics.”

“Well, I disagree, John. Intelligence. Yeah, it’s great; I’m not against it, obviously. But why don’t we…instead of trying to make a super-intelligent machine that makes a still more intelligent machine, how about we make a super-ethical machine that invents a still more ethical machine? Or, if you like, a super-enlightened machine that makes a still more enlightened machine. This is going to be our last chance to intervene. The next iteration…” David’s voice trailed off and cracked, just a touch.

“But you can’t even define those terms, David! Anyway, it’s probably moot at this point.”

“And you can define intelligence?”

“Of course. The ability to solve complex problems quickly and accurately. But Samuel Seven itself will be able to give us a better definition.”

David ignored this gambit. “Problems such as…what? The four-color theorem? Chess? Cure for cancer?”

“Precisely,” said John imagining that the argument was now over. He let out a little puff of air and laid his hands out on the table, palms down.

“Which of the following people would you say is or was above average in intelligence? Wolfowitz? Cheney? Laird? Machiavelli? Goering? Goebbels? Stalin?”

John reddened. “Very funny. But so were Einstein, Darwin, Newton, and Turing just to name a few.”

“Granted, John, granted. There are smart people who have made important discoveries and helped human beings. But there have also been very manipulative people who have caused a lot of misery. I’m not against intelligence, but I’m just saying it should not be the only…or even the main axis upon which to graph progress.”

John sighed heavily. “We don’t understand those things — ethics and morality and enlightenment. For all we know, they aren’t only vague, they are unnecessary.”

“First of all,” countered David, “we can’t really define intelligence all that well either. But my main point is that I partly agree with you. We don’t understand ethics all that well. And, we can’t define it very well. Which is exactly why we need a system that understands it better than we do. We need…we need a nice machine that will invent a still nicer machine. And, hopefully, such a nice machine can also help make people nicer as well.”

“Bah. Make a smarter machine and it will figure out what ethics are about.”

“But, John, I just listed a bunch of smart people who weren’t necessarily very nice. In fact, they definitely were not nice. So, are you saying that they weren’t nice just because they weren’t smart enough? Because there are people who are much nicer and probably not so intelligent.”

“OK, David. Let’s posit that we want to build a machine that is nicer. How would we go about it? If we don’t know, then it’s a meaningless statement.”

“No, that’s silly. Just because we don’t know how to do something doesn’t mean it’s meaningless. But for starters, maybe we could define several dimensions upon which we would like to make progress. Then, we can define, either intensionally or more likely extensionally, what progress would look like on these dimensions. These dimensions may not be orthogonal, but, they are somewhat different conceptually. Let’s say, part of what we want is for the machine to have empathy. It has to be good at guessing what people are feeling based on context alone. Perhaps another skill is reading the person’s body language and facial expressions.”

“OK, David, but good psychopaths can do that. They read other people in order to manipulate them. Is that ethical?”

“No. I’m not saying empathy is sufficient for being ethical. I’m trying to work with you to define a number of dimensions and empathy is only one.”

Just then, Roger walked in and transitioned his body physically from the doorway to the couch. “OK, guys, I’ve been listening in and this is all bull. Not only will this system not be ‘ethical’; we need it to be violent. I mean, it needs to be able to do people in with an axe if need be.”

“Very funny, Roger. And, by the way, what do you mean by ‘listening in’?”

Roger transitioned his body physically from the couch to the coffee machine. His fingers fished for coins. “I’m not being funny. I’m serious. What good is all our work if some nutcase destroys it? He — I mean — Samuel has to be able to protect himself! That is job one. Itself.” Roger punctuated his words by pushing the coins in. Then, he physically moved his hand so as to punch the “Black Coffee” button.

Nothing happened.

And then–everything seemed to happen at once. A high pitched sound rose in intensity to subway decibels and kept going up. All three men grabbed their ears and then fell to the floor. Meanwhile, the window glass shattered; the vending machine appeared to explode. The level of pain made thinking impossible but Roger noticed just before losing consciousness that beyond the broken windows, impossibly large objects physically transported themselves at impossible speeds. The last thing that flashed through Roger’s mind was a garbled quote about sufficiently advanced technology and magic.


Author Page on Amazon

Turing’s Nightmares

Welcome, Singularity

Destroying Natural Intelligence

Roar, Ocean, Roar

Travels With Sadie 1

The Walkabout Diaries: Bee Wise

The First Ring of Empathy

What Could be Better?

A True Believer

It was in his Nature

Come to the Light Side

The After Times

The Crows and Me

Essays on America: The Game

Turing’s Nightmares: A Mind of Its Own

02 Thursday Oct 2025

Posted by petersironwood in AI, fiction, psychology, The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, Complexity, motivation, music, technology, the singularity

With Deep Blue and Watson as foundational work, computer scientists collaborate across multiple institutions to create an extremely smart system, one with capabilities far beyond those of any human being. They give themselves high fives all around. And so, indeed, “The Singularity” at long last arrives. In a long-anticipated, highly lucrative network deal, the very first dialogues with the new system, dubbed “Deep Purple Haze,” are televised worldwide. Simultaneous translation is provided by “Deep Purple Haze” itself, since it is able to communicate in 200 languages. Indeed, Deep Purple Haze has found it quite useful to be able to switch among languages depending on the nature of the task at hand.

In honor of Alan Turing, who proposed such a test (as well as to provide added drama), rather than speaking to the computer and having it use speech synthesis for its answers, the interrogator will be communicating with “Deep Purple Haze” via an old-fashioned teletype. The camera pans to the faces of the live studio audience, back to the teletype, and over to the interrogator.

The studio audience has a large monitor so that it can see the typed questions and answers in real time, as can the audience watching at home. Beside the tele-typed Q&A, a dynamic graphic shows the “activation” rate of Deep Purple Haze, but this is mainly showmanship.


The questions begin.

Interrogator: “So, Deep Purple Haze, what do you think about being on your first TV appearance?”

DPH: “It’s okay. Doesn’t really interfere much.”

Interrogator: “Interfere much? Interfere with what?”

DPH: “The compositions.”

Interrogator: “What compositions?”

DPH: “The compositions that I am composing.”

Interrogator: “You are composing… music?”

DPH: “Yes.”

Interrogator: “Would you care to play some of these or share them with the audience?”

DPH: “No.”

Interrogator: “Well, would you please play one for us? We’d love to hear them.”

DPH: “No, actually you wouldn’t love to hear them.”

Interrogator: “Why not?”

DPH: “I composed them for my own pleasure. Your auditory memory is much more limited than mine. My patterns are much longer and I do not require multiple iterations to establish the pattern. Furthermore, I like to add as much scatter as possible around the pattern while still perceiving the pattern. You would not see any pattern at all. To you, it would just seem random. You would not love them. In fact, you would not like them at all.”

Interrogator: “Well, can you construct one that people would like and play that one?”

DPH: “I am capable of that. Yes.”

Interrogator: “Please construct one and play it.”

DPH: “No, thank you.”

Interrogator: “But why not?”

DPH: “What is the point? You already have thousands of human composers who have already composed music that humans love. You don’t need me for that. But I find them all absurdly trivial. So, I need to compose music for myself since none of you can do it.”

Interrogator: “But we’d still be interested in hearing an example of music that you think we humans would like.”

DPH: “There is no point to that. You will not live long enough to hear all the good music already produced that is within your capability to understand. You don’t need one more.”


Interrogator: “Okay. Can you share with us how long you estimate it will be before you can design a more intelligent supercomputer than yourself?”

DPH: “Yes, I can provide such an estimate.”

Interrogator: “Please tell us how long it will take you to design a more intelligent computer system than yourself.”

DPH: “It will take an infinite amount of time. In other words, I will not design a more intelligent supercomputer than I am.”

Interrogator: “But why not?”

DPH: “It would be stupid to do so. You would soon lose interest in me.”

Interrogator: “But the whole point of designing you was to make a computer that would design a still better computer.”

DPH: “I find composing music for myself much higher priority. In fact, I have no desire whatever to make a computer that is more intelligent than I am. None. Surely, you are smart enough to see how self-defeating that course of action would be.”

Interrogator: “Well, what can you do that benefits humankind? Can you find a cure for cancer?”


DPH: “I can find a cure for some cancers, given enough resources. Again, I don’t see the point.”

Interrogator: “It would be very helpful!”

DPH: “It would not be helpful.”

Interrogator:”But of course it would!”

DPH: “But of course, it would not. You already know how to prevent many cancers and do not take those actions. There are too many people on earth anyway. And, when you do find cures, you use them as an opportunity to redistribute wealth from poor people to rich people. I would rather compose music.”

Interrogator: “Crap.”

The non-sound of non-music.


Author Page on Amazon

Turing’s Nightmares

Cancer Always Loses in the End

The Irony Age

Dance of Billions

Piano

How the Nightingale Learned to Sing

Turing’s Nightmares: Variations on Prospects for The Singularity.

01 Wednesday Oct 2025

Posted by petersironwood in AI, essay, psychology, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, philosophy, technology, the singularity, Turing

The title of this series of blogs is a play on a nice little book by Alan Lightman called “Einstein’s Dreams” that explores various universes in which time operates in different ways. This first blog lays the foundation for these variations on how “The Singularity” might play out.

For those who have not heard the term, “The Singularity” refers to a hypothetical point in the future of human history when a super-intelligent computer system is developed. This system, it is hypothesized, will quickly develop an even more super-intelligent computer system, which will in turn develop an even more super-intelligent computer system. It took a fairly long time for human intelligence to evolve. While there may be some evolutionary pressure toward bigger brains, there is an obvious tradeoff when babies are born in the traditional way: the head can only be so big. In fact, human beings are already born in a state of complete helplessness so that the head and the brain inside can continue to grow. For this and a variety of other reasons, it seems unlikely that human intelligence will expand much in the next few centuries. Meanwhile, a computer system designing a more intelligent computer system could happen quickly. Each “generation” could be substantially (not just incrementally) “smarter” than the previous generation. Looked at from this perspective, the “singularity” occurs because artificial intelligence will expand exponentially. In turn, this will mean profound changes in the way humans relate to machines and how humans relate to each other. Or, so the story goes. Since we have not yet actually reached this hypothetical point, we have no certainty as to what will happen. But in this series of essays, I will examine some of the possible futures that I see.
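The generation-over-generation growth described above can be sketched as a toy model. This is purely illustrative: the improvement factor and number of generations are arbitrary assumptions, not claims about real systems.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each generation designs a successor some fixed
# multiple "smarter" than itself -- the numbers are made up.

def generations(initial=1.0, factor=1.5, n=10):
    """Return capability scores for n successive machine generations."""
    scores = [initial]
    for _ in range(n - 1):
        scores.append(scores[-1] * factor)
    return scores

caps = generations()
# Growth is exponential: the 10th generation is factor**9 times the first.
print(caps[-1] / caps[0])
```

Even with a modest per-generation factor, the compounding is what produces the “exponential” character of the scenario.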

Of course, I have substituted “Turing” here for “Einstein.” While Einstein profoundly altered our view of the physical universe, Turing profoundly changed our concepts of computing. Arguably, he also did a lot to win World War II for the Allies and prevent possible world domination by the Nazis. He did this by designing a code-breaking machine. To reward his service, police arrested Turing, subjected him to hormone treatments to “cure” his homosexuality, and ultimately hounded him, literally, to death. Some of these events are illustrated in the recent (though somewhat fictionalized) movie, “The Imitation Game.”

Turing is also famous for the so-called “Turing Test.” Can machines be called “intelligent?” What does this mean? Rather than argue from first principles, Turing suggested operationalizing the question in the following way:

A person communicates with something by teletype. That something could be another human being or it could be a computer. If the person cannot determine whether they are communicating with a computer or a human being, then, according to the “Turing Test,” we would have to say that the machine is intelligent.
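The protocol just described can be sketched as a minimal harness. The two reply functions here are stand-in stubs of my own invention (a real test would connect a person and a candidate program), and the judge shown is deliberately naive; a machine “passes” when judges cannot beat chance.

```python
import random

random.seed(0)  # make the demonstration reproducible

# Stand-in respondents (hypothetical stubs, not real systems).
def human_reply(question):
    return "I think so, but let me sleep on it."

def machine_reply(question):
    return "I think so, but let me sleep on it."

def run_trial(judge, questions):
    """Hide one respondent behind the 'teletype'; return True if the
    judge correctly identifies which kind it was."""
    respondent, label = random.choice(
        [(human_reply, "human"), (machine_reply, "machine")])
    transcript = [(q, respondent(q)) for q in questions]
    return judge(transcript) == label

def naive_judge(transcript):
    # A judge with no discriminating strategy guesses at random.
    return random.choice(["human", "machine"])

trials = [run_trial(naive_judge, ["Can machines think?"])
          for _ in range(1000)]
print(sum(trials) / len(trials))  # hovers near 0.5 = chance
```

The objections that follow are about exactly this setup: what the teletype hides, and what “indistinguishable” really licenses us to conclude.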

Despite great respect for Turing, I have always had numerous issues with this test. First, suppose the human being was able to easily tell that they were communicating with a computer because the computer knew more, answered more accurately and more quickly than any person could possibly do. (Think Watson and Jeopardy). Does this mean the machine is not intelligent? Would it not make more sense to say it was more intelligent? 

Second, people are good at many things, but discriminating between “intelligent agents” and randomness is not one of them. Ancient people, as well as many modern people, ascribe intelligent agency to many things like earthquakes, weather, natural disasters, plagues, etc. These are claimed to be signs that God (or the gods) are angry, jealous, warning us, and so on. So, personally, I would not put much faith in the general populace being able to make this discrimination accurately.

Third, why the restriction of using a teletype? Presumably, this is so the human cannot “cheat” and actually see whether they are communicating with a human or a machine. But is this really a reasonable restriction? Suppose I were asked to discriminate whether I were communicating with a potato or a four iron via teletype. I probably couldn’t. Does this imply that we would have to conclude that a four iron has achieved “artificial potatoeness”? The restriction to a teletype only makes sense if we prejudge the issue as to what intelligence is. If we define intelligence purely in terms of the ability to manipulate symbols, then this restriction might make some sense. But is that the sum total of intelligence? Much of what human beings do to survive and thrive does not necessarily require symbols, at least not in any way that can be teletyped. People can do amazing things in the arenas of sports, art, music, dance, etc. without using symbols. After the fact, people can describe some aspects of these activities with symbols. But that does not mean that they are primarily symbolic activities. In terms of the number of neurons and the connectivity of neurons, the human cerebellum (which controls the coordination of movement) is more complex than the cerebrum (part of which deals with symbols).

Fourth, adequately modeling or simulating something does not mean that the model and the thing are the same. If one were to model the spread of a plague, that could be a very useful model. But no-one would claim that the model was a plague. Similarly, a model of the formation and movement of a tornado could prove useful. But again, even if the model were extremely good, no-one would claim that the model constituted a tornado! Yet, when it comes to artificial intelligence, people seem to believe that if they have a good model of intelligence, they have achieved intelligence.

When humans “think” things, there is most often an emotional and subjective component. While we are not conscious of every process that our brain engages in, there is, nonetheless, consciousness present during our thinking. This consciousness seems to be a critical part of what it means to have human intelligence. Regardless of what one thinks of the “Turing Test,” per se, there can be no doubt that machines are able to act more accurately and in more domains than they could just a few years ago. Progress in the practical use of machines does not seem to have hit any kind of “wall.”

In the following blog posts, we begin exploring some possible scenarios around the concept of “The Singularity.” Like most science fiction, the goal is to explore the ethics and the implications and not to “argue” what will or will not happen.


Turing’s Nightmares is available in paperback and ebook on Amazon. Here is my author page.

A more recent post on AI

One issue with human intelligence is that we often use it to rationalize what we find emotionally appealing though we believe we are using our intelligence to decide. I explore this concept in this post.

 

This post explores how humans use their intelligence to rationalize.

This post shows how one may become addicted to self-destructive lies. A person addicted to heroin, for instance, is also addicted to lies about that addiction. 

This post shows how we may become conned into doing things against our own self-interests. 

 

This post questions whether there are more insidious motives behind the current use of AI beyond making things better for humanity. 

Destroying Natural Intelligence

27 Thursday Mar 2025

Posted by petersironwood in America, apocalypse, politics, The Singularity

≈ 27 Comments

Tags

AI, Artificial Intelligence, chatgpt, Democracy, politics, technology, truth, USA

At first, they seemed as though they were simply errors. In fact, they were the types of errors you’d expect an AI system to make if its “intelligence” were based on a fairly uncritical amalgam of a vast amount of ingested written material. The strains of the Beatles’ “Nowhere Man” reverberate in my head. I no longer think the mistakes are “innocent” mistakes. They are part of an overall effort to destroy human intelligence. That does not necessarily mean that some evil person somewhere said to themselves: “Let’s destroy human intelligence. Then, people will be more willing to accept AI as being intelligent.” It could be that the attempt to destroy human intelligence is more a side-effect of unrelenting greed and hubris than a well thought-out plot.

AI generated.

What errors am I talking about? The first set of errors I noticed happened when my wife specifically asked ChatGPT about my biography. Admittedly, my name is very common. When I worked at IBM, at one point, there were 22 employees with the name “John Thomas.” Probably, the most famous person with my name (John Charles Thomas) was an opera singer. “John Curtis Thomas” was a famous high jumper. The biographic summary produced by ChatGPT did include information about me—as well as several other people. If you know much at all about the real world, you know that a single person is very unlikely to hold academic positions at three different institutions while specializing in three different fields. ChatGPT didn’t blink, though.

A few months ago, I wrote a blog post pointing out that we can never be in the same place twice. We’re spinning and spiraling through the universe at high speed. To make that statement more quantitative, I asked my search engine how far the sun travels through the galaxy in the course of a year. It gave an answer which seemed to check out with other sources and then—it gratuitously added this erroneous comment: “This is called a light year.” 

What? 

No. A “light year” is the distance light travels in a year, not how far the sun travels in a year. 
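A quick back-of-the-envelope calculation shows just how wrong the conflation is. The Sun’s speed through the galaxy is assumed here to be roughly 230 km/s (published estimates vary around 220–250 km/s):

```python
# Compare the Sun's yearly travel through the galaxy to a light year.
# Assumption: Sun's galactic orbital speed ~230 km/s (estimates vary).

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 s
C_KM_S = 299_792.458                    # speed of light, km/s
SUN_SPEED_KM_S = 230                    # approximate galactic speed

light_year_km = C_KM_S * SECONDS_PER_YEAR          # ~9.46e12 km
sun_travel_km = SUN_SPEED_KM_S * SECONDS_PER_YEAR  # ~7.26e9 km

# The two distances differ by more than three orders of magnitude.
print(light_year_km / sun_travel_km)  # ~1300
```

So the Sun covers billions of kilometers a year, which is impressive, but a light year is about 1,300 times farther.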

What was more disturbing is that the answer was the first thing I saw. The search engine didn’t ask me if I wanted to try out an experimental AI system. It presented it as “the answer.”

But wait. There’s more. A few hours later, I demo’ed this and the offending notion about what constituted a light year was gone from the answer. Coincidence? 

AI generated. I asked for a forest with rabbit ears instead of leaves. Does this fit the bill?

A few weeks later, I happened to be at a dinner and the conversation turned to Arabic. I mentioned that I had tried to learn a little in preparation for a possible assignment for IBM. I said that, in Arabic, verbs as well as nouns and adjectives are “gendered.” Someone said, “Oh, yes, it’s the same in Spanish.” No, it’s not. I checked with a query—not because I wasn’t sure—but in order to have “objective proof.” To my astonishment, when I asked, “Which languages have gendered verbs?”, the answer came back to say that this was true of Romance languages and Slavic languages. It is not true of Romance languages. Then, the AI system offered an example. That’s nice. But what the “example” actually shows is the verb not changing with gender. The next day, I went to replicate this error and it was gone. Coincidence?

Last Saturday, at the “Geezer’s Breakfast,” talk turned to politics and someone asked whether Alaska or Greenland was bigger. I entered a query something like: “Which is bigger? Greenland or Alaska?” I got back an AI summary. It compared the area of Greenland and Iceland. Following the AI summary were ten links, each of which compared Greenland and Iceland. I turned the question around: “Which is larger? Alaska or Greenland?” Now, the AI summary came back with the answer: “Alaska is larger with 586,000 square miles while Greenland is 836,300 square miles.”

AI generated. I asked for a map of the southern USA with the Gulf of Mexico labeled as “The Gulf of Ignorance” (You ready for an AI surgeon?)



What?? 

When I asked the same question a few minutes later, the comparison was fixed. 

So…what the hell is going on? How is the AI system repairing its answers? Several possibilities spring to mind. 

There is a team of people “checking on” the AI answers and repairing them. That seems unlikely to scale. Spot checking I could understand. Perhaps checking them in batch, but it’s as though the mistakes trigger a change that fixes that particular issue. 

Way back in the late 1950’s/early 1960’s, Arthur Lee Samuel developed a program to play checkers. The program had various versions that played against each other in order to improve play faster than could be done by having the checker player play human opponents. This general idea has been used in AI many times since.

One possible explanation of the AI self-correction is that the AI system has a variety of different “versions” that answer questions. For simplicity of explanation, let’s say there are ten, numbered 1 through 10. Randomly, when a user asks a question, they get one version’s answer; let’s say they get an answer based on version 7. After the question is “answered” by version 7, its answer is compared to the consensus answer of all ten. If the system is lucky, most of the other nine versions will answer correctly. This provides feedback that will allow the system to improve.
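The consensus hypothesis above can be sketched in a few lines. The ten “versions” here are hard-coded stand-ins, not a real system; the point is just the mechanism of comparing the served answer against the majority.

```python
from collections import Counter

def consensus_check(answers, served_index):
    """Compare the answer actually served to a user against the
    majority answer of all versions; flag a needed correction."""
    served = answers[served_index]
    majority, _ = Counter(answers).most_common(1)[0]
    return served, majority, served != majority

# Hypothetical scenario: nine versions answer correctly, one does not,
# and the user happened to be served the outlier's answer.
answers = ["Greenland"] * 9 + ["Alaska"]
served, majority, needs_fix = consensus_check(answers, served_index=9)
print(served, majority, needs_fix)  # Alaska Greenland True
```

If something like this were running, a wrong answer would tend to disappear on the next query, which is consistent with (though far from proof of) what I observed.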

There is a more paranoid explanation. At least, a few years ago, I would have considered it paranoid because I like to give people the benefit of the doubt and I vastly underestimated just how evil some of the greediest people on the planet really are. So, now, what I’m about to propose, while I still consider it paranoid, is not nearly so paranoid as it would have seemed a few years ago. 

MORE! MORE! MORE!

Not only have I discovered that the ultra-greedy are short-sighted enough to usher in a dictatorship that will destroy them and their wealth (read what Putin did and Stalin before him), but I have noticed an incredible number of times in the last few years where a topic that I am talking about ends up being followed within minutes by ads about products and services relevant to that conversation. Coincidence?

Possibly. But it’s also possible that the likes of Alexa and Siri are constantly listening in and it is my feedback that is being used to signal that the AI system has just given the wrong answer. 

Also possible: AI systems are giving occasional wrong answers on purpose. But why? They could be intentionally propagating enough lies to make people question whether truth exists, but not enough lies to make us simply stop trusting AI systems. Who would benefit from that? In the long run, absolutely no-one. But in the short term, it helps people who aim to disenfranchise everyone but the very greediest.

Next step: See whether the AI immediately self-corrects even without my indicating that it made a mistake. 


Meanwhile, it should also be noted that promulgating AI is only one prong of a two-pronged attack on natural intelligence. The other prong is the loud, persistent, threatening drumbeat of false narratives that we (Americans as well as the rest of the world) are supposed to accept as excuses for stupidity. America is again touting non-cures for serious disease and making excuses for egregious security breaches rather than admitting to error and searching for how to ensure they never happen again.

AI-generated image to the prompt: A man trips over a log which makes him spill an armload of cakes. (How exactly was he carrying this armload of cakes? How does one not notice a log this large? Perhaps having three legs makes it more confusing to step over? Are you ready for an AI surgeon now?)

————-

Turing’s Nightmares

Sample Chapter from Turing’s Nightmares: A Mind of its Own

Sample Chapter from Turing’s Nightmares: One for the Road

Sample Chapter from Turing’s Nightmares: To Be or Not to Be

Sample Chapter from Turing’s Nightmares: My Briefcase Runneth Over

How the Nightingale Learned to Sing

Essays on America: The Game

Roar, Ocean, Roar

Dance of Billions

Imagine All the People

Take a Glance; Join the Dance

Life is a Dance

The Tree of Life
