petersironwood

~ Finding, formulating and solving life's frustrations.

Reframing the Problem: Paperwork & Working Paper

04 Thursday Dec 2025

Posted by petersironwood in AI, creativity, design rationale, HCI, management, psychology, Uncategorized, user experience


Tags

AI, ethics, leadership, life, philosophy, politics, problem finding, problem formulation, problem framing, problem solving, thinking, truth

Photo by Pixabay on Pexels.com




This is the second in a series about the importance of correctly framing a problem. Generally, at least in formal American education, the teacher gives you a problem. Not only that, if you are in Algebra class, you know the answer will be based in Algebra. If you are in Art class, you’re expected to paint a picture. If you painted a picture in Algebra class, or wrote down a formula in Art class, they would send you to the principal for punishment. But in real life, the way a problem is presented may steer you far away from the most elegant solution to the real problem.

Doing a Google search on “problem solving” just now yielded 208 million results. Entering “problem framing” yielded only 182 thousand: roughly a thousand times as much emphasis on problem solving as on problem framing. [Update: I redid the search today, a little over three years later. On 3/6/2024, I got 542M hits on “problem solving” and 218K hits on “problem framing” – increases in both, but the ratio is even worse than it was in 2021.] [Second update: I did the search today, Dec. 4th, 2025, and the information was not given – but that’s the subject of a different post.]

Let’s think about that ratio of 542 million to 218 thousand for a moment. Roughly, that’s 2500 to 1. If you have wrongly framed the problem, you will not only have failed to solve the real problem; worse, you will often have convinced yourself and others that you have solved it. That makes it much more difficult to recognize and solve the real problem, even for a solitary thinker. And making the political change required to redirect hundreds or thousands of people will be incalculably more difficult.
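Just to check that arithmetic: using the hit counts quoted above (which are themselves only Google’s rough estimates), the ratios work out like this.

```python
# Ratio of "problem solving" hits to "problem framing" hits,
# using the approximate counts quoted in the text.
solving_2020, framing_2020 = 208_000_000, 182_000
solving_2024, framing_2024 = 542_000_000, 218_000

print(round(solving_2020 / framing_2020))  # about 1100 to 1
print(round(solving_2024 / framing_2024))  # about 2500 to 1
```

So the imbalance not only persisted between the two searches; it roughly doubled.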

All of that brings us to today’s story. For about a decade, I worked as executive director of an AI lab for a company in the computers & communications industry. At one point, in the late 1980’s, all employees were supposed to sign some new paperwork. An office manager called from a building several miles away, asking me to have my admin work with his admin to set up a schedule for all 45 people in my AI lab to go over to his office and sign this paperwork as soon as possible. That would be a mildly interesting logistics problem, and I might even have been tempted to step in and help solve it. More likely, if I had tried to solve it, some much brighter & more competent colleague would have done it much faster.

Photo by Charlie Solorzano on Pexels.com

But why?

Why would I ask each of 45 people to interrupt their work; walk to their cars; drive in traffic; park in a new location; find this guy’s office; walk up there; sign some paper; walk out; find their car; drive back; park again; walk back to their office and try to remember where the heck they were? Instead, I told him that wasn’t happening but he’d be welcome to come over here and have people sign the paperwork. 

You could argue that that was a 4500% improvement in productivity, but I think that understates the case. The administrator’s work, at least in this regard, was to get this paperwork signed. He didn’t need to do mental calculations to tie these signings together. On the other hand, a lot of the work that the AI folks did was hard mental work. That means that interrupting them would be much more destructive than it would be to interrupt the administrator while he watched someone sign their name. Even that understates the case, because many of the people in AI worked collaboratively and (perhaps you remember those days) people were working face to face. Software tools to coordinate work were not as sophisticated as they are now. Often, having one team member disappear for a half hour would not only impact their own work; it would impact the work of everyone on the team.
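To see where a figure like 4500% comes from, here is a back-of-the-envelope sketch. The one-hour round trip is a hypothetical round number for illustration, not a measurement from the actual episode:

```python
# Hypothetical comparison of the two framings of the signing problem.
people = 45
round_trip_hours = 1.0   # assumed: interrupt work, drive, park, sign, return

# As framed by the office manager: each of 45 researchers makes the trip.
hours_as_framed = people * round_trip_hours

# As reframed: one administrator makes one trip.
hours_reframed = 1 * round_trip_hours

print(hours_as_framed / hours_reframed)  # 45.0, i.e. 4500% of the cost
```

And, as the paragraph above argues, this counts only travel time; it ignores the much larger cost of breaking the concentration of people doing hard mental work.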

Quantitatively comparing apples and oranges is always tricky. Of course, I am also biased because my colleagues are people I greatly admire. Nonetheless, it seems obvious that the way the problem was presented was a non-optimal “framing.” It may or may not have been presented that way from a purely selfish standpoint; that is, from wanting to do what’s most convenient for oneself rather than what’s best for the company as a whole. I suspect that it was more likely just the first idea that occurred to him. But in your own life, beware. Sometimes, you will mis-frame a problem because of “natural causes.” But sometimes, people may intentionally hand you a bad framing because they view it as being in their interest to lead you to solve the wrong problem.

Politics, of course, takes us into another realm entirely. People with political power may pretend to solve one problem while they are really following a completely different agenda. One could imagine, for instance, a head of state claiming to pursue a war for the good of his people when he’s really doing it to stay in power. Or, he could claim he is making cities safe by deploying troops when he is really interested in suppressing the vote in areas that can see through his cons. Or, a would-be dictator could claim he is spending your tax dollars to make government more efficient when that has nothing to do with what he is *actually* doing, which is to collect data on citizens and make the government ineffective so that people lose confidence in government and instead invest in private solutions.

Even when people’s motivations are noble, or at least clear, it is still quite easy to frame a problem wrongly because of surface features. It may look like a problem that requires calculus when it actually requires psychology; or it may look like a problem that requires public relations expertise when what is actually required is ethical leadership.

Photo by Nikolay Ivanov on Pexels.com

——————————————————

Author Page on Amazon

Tools of Thought

A Pattern Language for Collaboration and Cooperation

The Myths of the Veritas: The First Ring of Empathy

Essays on America: Wednesday

Essays on America: The Stopping Rule

Essays on America: The Update Problem

My Cousin Bobby

Facegook

The Ailing King of Agitate

Dog Trainers

The Doorbell’s Ringing! Can you get it?

02 Tuesday Dec 2025

Posted by petersironwood in creativity, design rationale, psychology, story, Uncategorized, user experience


Tags

books, problem finding, problem formulation, problem framing, problem solving, story, thinking

Photo by Little Visuals on Pexels.com

After a long day’s work, I arrived home to a distraught wife. Not, “Hi, sweetheart” but “This doorbell is driving me crazy!” 

Me: “What doorbell? What are you talking about?” 

People differ in how they perceive the world around them. In my case, for instance, I’m very easily distracted by movement in my visual field. Noise can be annoying, but it rarely rises to that level. For instance, when TV commercials come on, I simply “tune them out” and instead tune in to my own thoughts. My high frequency hearing isn’t too great either. So, at first, I didn’t understand what my wife was referring to. 

Beep. 

Photo by Luisa Fernanda Bayona on Pexels.com

“That! That doorbell beep!” 

Ah, now I understood. And, there it went again. Once I knew what to listen for, I had to agree it was annoying though much more annoying to my wife because she’s more tuned in to sound than I am and her ability to hear high frequencies is also better.

She then upped the ante. “I have to leave. I can’t stand it! You have to make it stop!” 

I looked at the wall between our entryway and the kitchen. That’s where the doorbell ringer was. I unscrewed a couple of screws and removed the housing. Inside was the actual doorbell and three wires. A quick snip should at least stop the noise until we figured out a more permanent fix. I sighed. I suspected we would have to buy a new doorbell. Then, I laughed a bit as the Hollywood scenes from a hundred movies flashed before my eyes:

The Hero finds the bomb, with its conveniently placed timer, but it’s counting down: 30 seconds, 29, 28. He has to cut a wire! But which one!?

The consequences of my error would not be so great. Still…So, I cut the black wire.

Photo by Pixabay on Pexels.com



BEEP! BEEP! 

OK. I cut the red wire.

BEEP! BEEP! 

OK. I cut the green wire, the last wire. I was having trouble understanding why it would be necessary to cut all three wires. But whatever. I had now cut all three wires.

BEEP! BEEP!

??

Electrical circuits don’t work by magic. How can the doorbell be beeping when it has no power? 

It can’t. 

Photo by Pixabay on Pexels.com

It wasn’t the doorbell at all.



Months earlier, my wife & I had attended a Dave Pelz “Short School” for putting, chipping, and sand shots. At that course, we received a small electronic metronome — about the size of a credit card. The metronome was to be used to help make sure you had a consistent rhythm on your putting stroke. Since the course, the metronome had sat atop our upright piano. Apparently, one of the cats had turned it on and then slapped it onto the floor behind the piano. The sounding board both amplified the sound and made it harder to localize. Eventually, we tracked it down, fished out the metronome from behind the piano and clicked it off. Problem solved. 

Except for the non-functional doorbell. 

I had initially “solved” the wrong problem. I had solved the problem of the mis-firing doorbell by cutting all the wires. That was not the real problem. I had jumped onto my wife’s formulation and framing of the problem. There are plenty of times in my life when I have solved the wrong problem without any help from anyone else. This isn’t a story about assigning blame. It’s a story about the importance of correctly solving the right problem.

Photo by Karolina Grabowska on Pexels.com


It is very easy to get led into solving the “wrong” problem. 

In the days ahead, I will relate a few more examples. 

———————————————

What about the Butter Dish? 

Index to “Thinking tools” 

Author Page on Amazon

Wednesdays

Labelism

The Update Problem

The Invisibility Cloak of Habit

Where does your loyalty lie?

The stopping rule

Business Process Re-engineering

Measure for Measure

01 Monday Dec 2025

Posted by petersironwood in AI, essay, psychology, science, Uncategorized


Tags

art, context, decision making, Democracy, framing, HCI, photography, politics, problem formulation, problem framing, problem solving, technology, thinking, Travel, truth, USA, UX

(More or Less is only More or Less, More or Less)

Confusing. I know. Let’s unpack. 

We like to measure things. And, generally, that can be a very good thing. Once we measure and quantify, we can bring to bear the world’s most incredible toolbox of mathematical, engineering, and scientific methods. However…

Photo by Karolina Grabowska on Pexels.com

It often happens that we can’t really measure what we’d like to measure so instead we measure something that we can measure which we imagine to be a close cousin to what we’d really like to measure. That’s still not a bad thing. But it’s risky. And it becomes a lot more risky if we forget that we are measuring a close cousin at best. Sometimes, it’s actually a distant cousin. 

Here’s an example. Suppose a company is interested in the efficient handling of customer service calls (who isn’t?). A typical measure is the average time per call. So, a company might be tempted to reward their Customer Service employees based on having a short average time per call. The result would be that the customer would get back to whatever they were doing more quickly. AND — they wouldn’t have to be on hold in the service queue so long because each call would be handled, on average, more quickly. Good for the customer. The customer service reps would be saving money for the company by answering questions quickly. Some of the money saved will (hopefully) mean raises for the customer service reps. It’s a win/win/win! 

Or is it? 

Imagine this not unlikely scenario:

The managers of the CSR’s (customer service reps) say that there’s a big push from higher management to make calls go more quickly. They may hint that if the average service time goes down enough, everyone will get a raise. Or, they might set much more specific targets to shoot for. 

In either case, the CSR’s are motivated to handle calls more quickly. But how? One way might be for them to learn a whole lot more. They might exchange stories among themselves and perhaps they will participate in designing a system to help them find relevant information more quickly. It might really turn out to be a win/win/win.

On the other hand, one can also imagine that the CSR’s instead simply get rid of “pesky” users as quickly as possible.



“Reboot and call back if that doesn’t work.” 

“Sounds like an Internet issue. Check your router.” 

“That’s an uncovered item.” 

“What’s your account number? Don’t have it? Find it & call back.” 

With answers like this, the average time to handle a call will certainly go down!

But it won’t result in a win/win/win!

Users will have to call back 2, 3, 4 or even more times to get their issues adequately resolved. This will glut the hold queues more than if they had had their questions answered properly in the first place. Endlessly alternating between raspy music and a message re-assuring the customer that their call is important to company XYZ will not endear XYZ to its customers.

Ultimately, the CSR’s themselves will likely suffer a drop in morale if they begin to view their “job” to get off the phone as quickly as possible rather than to be as helpful as possible. Likely too, sales will begin to decline. As word gets around that the XYZ company has lousy customer service and comparative reviews amplify this effect, sales will decline even more precipitously. 
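A toy model makes the trade-off concrete. All of the numbers below (call lengths, resolution rates) are invented for illustration; the point is only that a shorter average call can still cost customers more total time once callbacks are counted:

```python
# Toy model (hypothetical numbers): average handle time vs. first-call resolution.
def total_customer_minutes(issues, minutes_per_call, resolve_rate):
    """Expected total minutes to resolve all issues, counting callbacks.

    Each call resolves the issue with probability resolve_rate; otherwise
    the customer calls back, so the expected number of calls per issue
    is 1 / resolve_rate (a geometric distribution).
    """
    expected_calls = 1 / resolve_rate
    return issues * expected_calls * minutes_per_call

issues = 1000
helpful = total_customer_minutes(issues, minutes_per_call=8, resolve_rate=0.9)
deflect = total_customer_minutes(issues, minutes_per_call=3, resolve_rate=0.25)

print(helpful)  # about 8889 minutes: longer calls, few callbacks
print(deflect)  # 12000 minutes: "fast" calls, but the hold queue gluts
```

The "deflect" policy wins on the measured metric (3 minutes per call versus 8) while losing on what the metric was supposed to stand for.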

Photo by Denniz Futalan on Pexels.com

There are two approaches executives often take in such a situation. 

Some executives (such as Mister Empathy) may be led to believe that quantification should be less emphasized and the important thing is to set the right tone for the CSR’s; to have them really care about their customers. Often, the approach is combined with better training. This can be a good approach.

Some executives (such as Mister Measure) may be led to believe that they need to do more quantification. In addition to average work time, measures will look at the percentage of users whose problem is solved the first time. Ratings of how effective the CSR was will be taken. Some users might even be called for in-depth interviews about their experience.  This can also be a good approach. 

There is no law against doing both, or trying each approach at different times or different places in order to learn which works better. 

There is a third approach however, which never has good results. That is the approach of Mister Misdirect.

Original drawing by Pierce Morgan



Mister Misdirect’s approach is to deny that there is an issue. Mister Misdirect doesn’t improve training. Mister Misdirect doesn’t put people in a better frame of mind. Mister Misdirect does not add additional measures. Mister Misdirect simply demands that CSR’s continue to drive down the average call time of individual calls and that sales go up! In extreme cases, Mister Misdirect may even fudge the numbers and make it appear that things are much better than they really are. Oh, yes. I have seen this with my own eyes. 

Unfortunately, this way of handling things often makes Mister Misdirect an addict. Once an executive starts down the path of making things worse and denying that they did so, they are easily ensnared in a trap. Initially, they only had to take responsibility for instituting, say, an incomplete measure and failing to anticipate the possible consequences. But now, having lied about it, they would have to admit not only that they caused a problem, but also that they lied about it.

The next day, when the executive wakes up, they have a choice:


1. Own up 


OR

2. Continue to deny

If they own up, the consequences will be immediately painful.
If they continue to deny, they will immediately feel relieved. Of course, if they have surrounded themselves with lackeys, they will feel more than simply relieved; they will feel vindicated or even proud. It’s not a “real pride” of course. But it’s some distant relative, I suppose. 

For a developer, UX person — or really any worker in an organization, the lesson from this is to anticipate such situations before they happen. If they happen anyway, try to call attention to the situation as quickly as possible. Yes, it may mean you lose favor with the boss. If that is so, then, you really might want to think about getting a new boss. Mister Misdirect will always ultimately fail and when he does, he will drag down a work team, a group, a division, or even an entire company. Mister Misdirect has one and only one framework for solving problems:

Try whatever pops into consciousness. 

If it works, take the credit. 

If it fails, blame an underling. 

But the real fun begins when he takes credit for something and then it turns out it was really a failure. Then, there is only one choice for Mister Misdirect and that is to claim that the false victory was real. From there on, it is Lose/Lose/Lose.

—————————————————-

  
Author Page on Amazon

————————————

Relevant essays, poems, & fiction about the importance of speaking truth to power:

Pattern Language: “Reality Check”

The Truth Train 

The Pandemic Anti-Academic

How The Nightingale Learned to Sing

Process Re-Engineering Comes to Baseball

——————————————————-

Posts on Problem Framing:

How to Frame Your Own Hamster Wheel

Wordless Perfection

Problem Formulation: Who Knows What?

I Went in Seeking Clarity

I Say Hello

Problem Framing: Good Point

Reframing the Problem: Paperwork & Working Paper

The Doorbell’s Ringing! Can you Get it?

Problem Formulation: Who Knows What?

28 Friday Nov 2025

Posted by petersironwood in AI, creativity, design rationale, psychology, Uncategorized


Tags

AI, browser, HCI, problem formulation, problem framing, problem solving, query, search, seo, technology, thinking, usability, UX

Photo by Nikolay Ivanov on Pexels.com

This post focuses on the importance of discovering who knows what. It’s easy to assume (without thinking!) that everyone knows what you know. 

At IBM Research, around the turn of the century, I was asked to look at improving customer satisfaction with the search function on IBM’s website. Rather than using someone else’s search engine, IBM used one developed at IBM’s Haifa Research lab. It was a very good search engine. Yet, customers were not happy. By way of background, it’s worth noting that, compared with many companies’ websites, IBM’s website was meant for a wide variety of users and contained many kinds of information. It was meant to support both people buying their first Personal Computer and IT experts at large banks. It had information about a wide variety of hardware, software, and services. The site was designed to serve as an attractor for investors, business partners, and potential employees. In other words, the site was vast and diverse. This made having a good search function particularly important.

A little study of the data that had already been collected showed that the mean number of search terms entered by customers was only 1.2. What?? How can that be? Here’s a website with thousands of products and services, designed for use by a huge diversity of users, and customers were entering a mean of only 1.2 search terms? What were they thinking?!



Of course, there were a handful of situations when one search term might work; e.g., if you wanted to find out everything about a specific product that had a unique one-word name or acronym (which was rare). For most situations though, a more “reasonable” search might be something like: “Open positions IBM Research Austin” or “PC external hard drives” or “LOTUS NOTES training.” 

We invited a sample of users of IBM products & services to come into the lab and do some tasks that we designed to illuminate this issue. In the task, they would need to find specified information on the IBM website while I observed them. One issue became immediately apparent. The search bar on the landing page was far too small. In actuality, users could enter as many search terms as they liked. Their terms would keep scrolling and scrolling until they hit “ENTER.” The developers knew this, but most of our users did not. They assumed they had to “fit” their query into the very small footprint that presented itself visually. Recommendation one was simply to make that space much larger. Once the search bar was expanded to about three times its original size, the number of search terms increased dramatically, as did user satisfaction. 

In this case, the users framed their search problem as: “How can I make the best query that fits into this tiny box?” (I’m not suggesting they said this to themselves consciously, but the visual affordance led them to that self-imposed constraint.) The developers thought the users would frame their search problem as: “What’s the best sequence of terms I can put into this virtually infinite window to get the search results I want?” After all, the developers knew that any number of terms could be entered.

Although increasing the size of the search bar made a big difference, the supposedly good search engine still returned many amazingly bad results. Why? The people at the Haifa lab who had developed the search engine were world class. At some point, I looked at the HTML of some of the web pages. Many web pages had masses of irrelevant metadata. I found some of the people who developed these web pages and discussed things with them. Can you guess what was going on?



Many of the developers of web pages were the same people who had been developing print media for those same products and services. They had no training and no idea about metadata. So, to put up the webpage about product XYZ, they would go to a nice-looking web page about something else, say, training opportunities for ABC. They would copy that entire page, including the metadata, and then set about changing the text about ABC to text about product XYZ. In many cases, they assumed that the strange stuff in angle brackets was some bizarre coding stuff that was necessary for the page to operate properly. They left it untouched. Furthermore, when they “tested” the pages they had created about XYZ, they looked okay. The information about XYZ was there. Problem solved.

Only of course, the problem wasn’t solved. The search engine considered the metadata that described the contents to be even more important than the contents themselves. So, the user would issue a query about XYZ and receive links about ABC because the XYZ page still had the “invisible” metadata about ABC. In this case, many of the website developers thought their problem was to put in good data when what they really needed to do was put in good data and relevant metadata. 
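A toy relevance scorer makes the failure mode easy to see. This is an illustrative sketch, not the Haifa engine’s actual ranking logic; the pages, weights, and function names are all invented:

```python
# Hypothetical illustration: a scorer that trusts metadata more than body text.
def score(page, term, meta_weight=3, body_weight=1):
    """Toy relevance score: term hits in metadata count triple."""
    return (meta_weight * page["meta"].count(term)
            + body_weight * page["body"].count(term))

# The XYZ product page was made by copying the ABC training page,
# metadata included, and editing only the visible text.
xyz_page = {"meta": "ABC training ABC", "body": "XYZ product XYZ"}

print(score(xyz_page, "ABC"))  # 6: ranks high for the WRONG topic
print(score(xyz_page, "XYZ"))  # 2: ranks low for its actual topic
```

To the page’s author, the page “looked right,” because only the body is visible; to the engine, the stale metadata said the page was about ABC.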

A third issue also revealed itself from watching users. In attempting to do their tasks, many of them suggested that IBM should provide a way for more than one webpage to appear side by side on the screen. That way, they could, for instance, compare the features and functions of two different product models, rather than having to copy down the information from the page about one model and then compare their notes against the page about the second.

Good suggestion. 

Of course, IBM & Microsoft had provided this function. All one had to do was “Right Click” in order to bring up a new window. Remember, these were not naive users. These were people who actually used IBM products. They “knew” how to use the PC and the main applications. Yet, they were still unfamiliar with the use of Right Click. Indeed, allowing on-screen comparisons is one of the handiest uses of Right-Click for many people. 

This issue is indicative of a very pervasive problem. Ironically, it is an outgrowth of good usability! When I began working with computers, almost nothing was intuitive. No-one would even attempt to start programming in FORTRAN or SNOBOL, let alone Assembly Language or Machine Code without looking at the manual. But LOTUS NOTES? A browser? A modern text editor? You can use these without even looking at the manual. That’s a great thing. But — 

…there’s a downside. The downside is that you may have developed procedures that work, but they may be extremely inefficient. You “muddle through” without ever realizing that there’s a much more efficient way to do things. Generally speaking, many users formulate their problem, say, in terms like: “How do I create and edit a document in this editor?” They do not formulate it in terms of: “How do I efficiently create and edit a document in this editor?” The developers know all the splendid features and functions they’ve put into the hardware and software, but the user doesn’t. 

It’s also worth noting that results in HCI/UX are dependent on the context. I would tend to assume that in 2021 (when I first published this post), most PC users knew about right-clicking in a browser even though in 2000, none of the ones I studied seemed to realize it. But —

I could be wrong. 

————————————

The Invisibility Cloak of Habit

Essays on America: Wednesday

Index to a catalog of “best practices” in teamwork & collaboration. 

Author Page on Amazon

What about the butter dish?

Labelism

The Stopping Rule

The Update Problem

Turing’s Nightmares: Eight

21 Friday Nov 2025

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, collaboration, cooperation, openai, peace, philosophy, seva, teamwork, technology, the singularity, Turing, ubuntu, United Peoples Ecosystem


Workshop on Human Computer Interaction for International Development

In chapter 8 of Turing’s Nightmares, I portray a quite different path to ultra-intelligence. In this scenario, people have begun to concentrate their energy not on building a purely artificial intelligence but on exploring the science of large-scale collaboration. In this approach, referred to by Doug Engelbart among others as Intelligence Augmentation, the “super-intelligence” comes from people connecting.

Photo by RF._.studio on Pexels.com

It could be argued that, in real life, we have already achieved the singularity. The human race has been pursuing “The Singularity” ever since we began to communicate with language. Once our common genetic heritage reached a certain point, our cultural evolution far out-stripped our genetic evolution. The cleverest, most brilliant person ever born would still not be able to learn much in their own lifetime compared with what they can learn from parents, siblings, family, school, society, reading, and so on.

Photo by AfroRomanzo on Pexels.com

One problem with our historical approach to communication is that it evolved for many years among small groups of people who shared goals and experiences. Each small group constituted an “in-group,” but relations with other groups posed more problems. The genetic evidence, however, makes clear that even very long ago, humans not only met but mated with other varieties of humans, proving that some communication is possible even among very different tribes and cultures.

Photo by Min An on Pexels.com

More recently, we humans started traveling long distances and trading goods, services, and ideas with other cultures. For example, the brilliance of Archimedes notwithstanding, the idea of “zero” was imported into European culture from Arab culture. The Rosetta Stone illustrates that even thousands of years ago, people began to see the advantages of being able to translate among languages. In fact, modern English even today contains phrases that illustrate that the Norman conquerors found it useful to communicate with the conquered. For example, the phrase “last will and testament” was traditionally used in law because it contains both the word “will,” with Germanic/Saxon origins, and the word “testament,” which has origins in Latin. Many other traditional legal terms in English have similar bilingual origins.

Automatic translation across languages has made great strides. Although not as accurate as human translation, it has reached the point where the essence of many straightforward communications can be usefully carried out by machine. The advent of the Internet, the web, and, more recently, Google has certainly enhanced human-human communication. It is worth noting that the tremendous value of Google arises only a little from having an excellent search engine and much more through the billions of transactions of other human beings. People are exploring and using MOOCs, on-line gaming, e-mail and many other important electronically mediated tools.

Photo by Rebecca Zaal on Pexels.com

Equally importantly, we are learning more and more about how to collaborate effectively, both remotely and face to face, both synchronously and asynchronously. Others continue to improve existing interfaces to computing resources and to invent new ones. Current research topics include how to communicate more effectively across cultural divides and how to have more coherent conversations when there are important differences in viewpoint or political orientation. All of these suggest that, as an alternative or at least an adjunct to making purely separate AI systems smarter, we can also use AI to help people communicate more effectively with each other and at scale. Some of the many investigators in these areas include Wendy Kellogg, Loren Terveen, Joe Konstan, Travis Kriplean, Sherry Turkle, Kate Starbird, Scott Robertson, Eunice Sari, Amy Bruckman, Judy Olson, and Gary Olson. There are several important conferences in the area, including the European Conference on Computer Supported Cooperative Work, the Conference on Computer Supported Cooperative Work, and Communities and Technologies. It does not seem at all far-fetched that we can collectively learn, in the next few decades, how to take international collaboration to the next level, and from there, we may well have reached “The Singularity.”

Photo by Patrick Case on Pexels.com

————————————-

For further reading, see: Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.

Thomas, J. C. (2012). Patterns for emergent global intelligence. In Creativity and Rationale: Enhancing Human Experience By Design J. Carroll (Ed.), New York: Springer.

Thomas, J. C., Kellogg, W.A., and Erickson, T. (2001). The Knowledge Management puzzle: Human and social factors in knowledge management. IBM Systems Journal, 40(4), 863-884.

Thomas, J. C. (2001). An HCI Agenda for the Next Millennium: Emergent Global Intelligence. In R. Earnshaw, R. Guedj, A. van Dam, and J. Vince (Eds.), Frontiers of human-centered computing, online communities, and virtual environments. London: Springer-Verlag.

Thomas, J.C. (2016). Turing’s Nightmares. Available on Amazon. http://tinyurl.com/hz6dg2

An Inside View of IBM’s Innovation Jam

————-

Author Page on Amazon

Turing’s Nightmares: The Road Not Taken

Pattern Language for Collaboration and Cooperation

The First Ring of Empathy

The Dance of Billions

Imagine All the People…

Roar, Ocean, Roar

Corn on the Cob

Take a Glance; Join the Dance

The Self-Made Man

Indian Wells

Turing’s Nightmares: Seven

20 Thursday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, competition, cooperation, ethics, philosophy, technology, the singularity, Turing

Axes to Grind.

Why the obsession with building a smarter machine? Of course, there are particular areas where being “smarter” really means being able to come up with more efficient solutions. Better logistics means you can deliver items to more people more quickly with fewer mistakes and with a lower carbon footprint. That seems good. Building a better Chess player or a better Go player might have small practical benefit, but it provides a nice objective benchmark for developing methods that are useful in other domains as well. But is smarter the only goal of artificial intelligence?

What would or could it mean to build a more “ethical” machine? Can a machine even have ethics? What about building a nicer machine or a wiser machine or a more enlightened one? These are all related concepts but somewhat different. A wiser machine, to take one example, might be a system that not only solves problems that are given to it more quickly. It might also mean that it looks for different ways to formulate the problem; it looks for the “question behind the question” or even looks for problems. Problem formulation and problem finding are two essential skills that are seldom even taught in schools for humans. What about the prospect of machines that do this? If its intelligence is very different from ours, it may seek out, formulate, and solve problems that are hard for us to fathom.

For example, outside my window is a hummingbird who appears to be searching the stone pine for something. It is completely unclear to me what he is searching for. There are plenty of flowers that the hummingbirds like and many are in bloom right now. Surely they have no trouble finding these. Recall that a hummingbird has an incredibly fast metabolism and needs to spend a lot of energy finding food. Yet, this one spent five minutes unsuccessfully scanning the stone pine for … ? Dead straw to build a nest? A mate? A place to hide? A very wise machine with freedom to choose problems may well pick problems to solve for which we cannot divine the motivation. Then what?

In this chapter, one of the major programmers decides to “ensure” that the AI system has the motivation and means to protect itself. Protection. Isn’t this the main rationalization for most of the evil and aggression in the world? Perhaps a super-intelligent machine would be able to manipulate us into making sure it was protected. It might not need violence. On the other hand, from the machine’s perspective, it might be a lot simpler to use violence and move on to more important items on its agenda.

This chapter also raises issues about the relationship between intelligence and ethics. Are intelligent people, even on average, more ethical? Intelligence certainly allows people to make more elaborate rationalizations for their unethical behavior. But does it correlate with good or evil? Lack of intelligence or education may sometimes lead people to do harmful things unknowingly. But lots of intelligence and education may sometimes lead people to do harmful things knowingly — but with an excellent rationalization. Is that better?

Even highly intelligent people may yet have significant blind spots and errors in logic. Would we expect that highly intelligent machines would have no blind spots or errors? In the scenario in chapter seven, the presumably intelligent John makes two egregious and overt errors in logic. First, he says that if we don’t know how to do something, it’s a meaningless goal. Second, he claims (essentially) that if empathy is not sufficient for ethical behavior, then it cannot be part of ethical behavior. Both are logically flawed positions. But the third and most telling “error” John is making is implicit — that he is not trying to dialogue with Don to solve some thorny problems. Rather, he is using his “intelligence” to try to win the argument. John already has his mind made up that intelligence is the ultimate goal and he has no intention of jointly revisiting this goal with his colleague. Because, at least in the US, we live in a hyper-competitive society where even dancing and cooking and dating have been turned into competitive sports, most people use their intelligence to win better, not to cooperate better. 

The golden sunrise glows through delicate leaves covered with dew drops.

If humanity can learn to cooperate better, perhaps with the help of intelligent computer agents, we can probably solve most of the most pressing problems we have even without super-intelligent machines. Will this happen? I don’t know. Could this happen? Yes. Unfortunately, Roger is not on board with that program toward better cooperation and in this scenario, he has apparently ensured the AI’s capacity for “self-preservation through violent action” without consulting his colleagues ahead of time. We can speculate that he was afraid that they might try to prevent him from doing so either by talking him out of it or appealing to a higher authority. But Roger imagined he “knew better” and only told them when it was a fait accompli. So it goes.

———–

Turing’s Nightmares

Author Page

Welcome Singularity

Destroying Natural Intelligence

Come Back to the Light Side

The First Ring of Empathy

Pattern Language Summary

Tools of Thought

The Dance of Billions

Roar, Ocean, Roar

Imagine All the People

Essays on America: The Game

Wednesdays

What about the Butter Dish?

Where does your Loyalty Lie?

Labelism

My Cousin Bobby

The Loud Defense of Untenable Positions

Turing’s Nightmares: Six

19 Wednesday Nov 2025

Posted by petersironwood in sports, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, cognitive computing, ethics, fiction, life, sports, Tennis, Turing

Human Beings are Interested in Human Limits.

About nine years ago, a Google AI system (AlphaGo) won its match against the human Go champion. Does this mean that people will lose interest in Go? I don’t think so. It may eventually mean that human players will learn faster and that top-level human play will improve. Nor will robot athletes supplant human athletes any time soon.

Athletics provides an excellent way for people to get and stay fit, become part of a community, and fight depression and anxiety. Watching humans vie in athletic endeavors helps us understand the limits of what people can do. This is something that our genetic endowment has wisely made fascinating. To a lesser extent, we are also interested in seeing how fast a horse can run, or how fast a hawk can dive or how complex a routine a dog can learn.

In Chapter 6 of “Turing’s Nightmares” I briefly explore a world where robotic competitors have replaced human ones. In this hypothetical world, the super-intelligent computers also find that sports is an excellent venue for learning more about the world. And, so it is! In “The Winning Weekend Warrior”, I provide many examples of how strategies and tactics useful in the sports world are also useful in business and in life. (There are also some important exceptions that are worth noting. In sports, you play within the rules. In life, you can play with some of the rules.)

Chapter 6 also brings up two controversial points that ethicists and sports enthusiasts should be discussing now. First, sensors are becoming so small, powerful, accurate, and lightweight that it is possible to embed them in virtually any piece of sports equipment (e.g., tennis racquets). Few people would call it unethical to include such sensors as training devices. However, very soon, these might also provide useful information during play. What about that? Suppose that you could wear a device that enhanced not only your sensory abilities but also your motor abilities? To some extent, the design of golf clubs, tennis racquets, and swimsuits is already doing this. Is there a limit to what would or should be tolerated? Should any device be banned? What about corrective lenses? What about sunglasses? Should all athletes have to compete nude? What about athletes who have to take “performance enhancing” drugs just to stay healthy? Sharapova’s recent case is just one. What about the athlete of the future who has undergone stem cell therapy to regrow a torn muscle or ligament? Suppose a major league baseball pitcher tears a tendon and it is replaced with a synthetic tendon that allows a faster fastball?

With the ever-growing power of computers and the collection of more and more data, big data analytics makes it possible for the computer to detect patterns of play that a human player or coach would be unlikely to perceive. Suppose a computer system is able to detect reliable “cues” that tip off what pitch a pitcher is likely to throw, or whether a tennis player is about to serve down the T or out wide? Novak Djokovic and Ted Williams were born with exceptional visual acuity. This means that they can pick out small visual details more quickly than their opponents and react to a serve or curve more quickly. But it also means that they are more likely to pick up subtle tip-offs in their opponents’ motions that give away their intentions ahead of time. Would we object if a computer program analyzed thousands of serves by Jannik Sinner or Carlos Alcaraz in order to detect patterns of tip-offs, and then that information was used to help train Alexander Zverev to “read” the service motions of his opponents? Of course, this does not just apply to tennis. It applies to reading a football play option, a basketball pick, the signals of baseline coaches, and so on.
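The kind of pattern-mining described above can be caricatured in a few lines of Python. The data, cue names, and threshold here are all invented for illustration; a real scouting system would chart thousands of serves and use proper statistics rather than a raw frequency cutoff.

```python
from collections import Counter, defaultdict

# Hypothetical charted serves: (observable cue, serve direction).
serves = [
    ("toss_left", "wide"), ("toss_left", "wide"), ("toss_left", "wide"),
    ("toss_left", "body"),
    ("toss_right", "T"), ("toss_right", "T"), ("toss_right", "wide"),
    ("toss_neutral", "T"), ("toss_neutral", "wide"), ("toss_neutral", "body"),
]

def tip_offs(data, threshold=0.7):
    """Return cues that predict one serve direction at least
    `threshold` of the time -- the "tells" a scout would flag."""
    by_cue = defaultdict(Counter)
    for cue, direction in data:
        by_cue[cue][direction] += 1
    tells = {}
    for cue, counts in by_cue.items():
        direction, n = counts.most_common(1)[0]
        if n / sum(counts.values()) >= threshold:
            tells[cue] = direction
    return tells

print(tip_offs(serves))  # {'toss_left': 'wide'}
```

Here a left ball toss precedes a wide serve three times out of four, so it gets flagged; the other cues fall below the cutoff.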

Instead of teaching Zverev these patterns ahead of time, suppose he were to have a device implanted in his back that received radio signals from a supercomputer able to “read” where the serve was going a split second ahead of time, and it was this signal that allowed him to anticipate better?

I do not know the “correct” ethical answer for all of these dilemmas. To me, it is most important to be open and honest about what is happening. So, if Lance Armstrong wants to use performance-enhancing drugs, perhaps that is okay if and only if everyone else in the race knows it and has the opportunity to take the same drugs, and if everyone watching knows it as well. Similarly, although I would prefer that tennis players only use IT for training, I would not be dead set against real-time aids if the public knows. I suspect that most fans (like me) would prefer their athletes “un-enhanced” by drugs or electronics. Personally, I don’t have an issue with using any medical technology to enhance the healing process. How do others feel? And what about athletes who “need” something like asthma medication in order to breathe, but it has a side effect of enhancing performance?

Would the advent of robotic tennis players, baseball players or football players reduce our enjoyment of watching people in these sports? I think it might be interesting to watch robots in these sports for a time, but it would not be interesting for a lifetime. Only human athletes would provide on-going interest. What do you think?

Readers of this blog may also enjoy “Turing’s Nightmares” and “The Winning Weekend Warrior.” John Thomas’s author page on Amazon


Welcome Singularity

The Day from Hell

Indian Wells Tennis Tournament

Destroying Natural Intelligence

US Open Closed

Life is a Dance

Take a Glance; Join the Dance

The Self-Made Man

The Dance of Billions 

Math Class: Who are you?

The Agony of the Feet

Wordless Perfection

The Jewels of November

Donnie Gets a Tennis Trophy

Turing’s Nightmares: Chapter Five

17 Monday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, health, medicine, Personal Assistant, philosophy, technology, the singularity, Turing

An Ounce of Prevention: Chapter 5 of Turing’s Nightmares

Hopefully, readers will realize that I am not against artificial intelligence (after all, I ran an AI lab for a dozen years); nor do I think the outcomes of increased artificial intelligence are all bad. Indeed, medicine offers a large domain where better artificial intelligence is likely to help us stay healthier longer. IBM’s Watson had already begun “digesting” the vast and ever-growing medical literature more than a decade ago. As investigators discover more and more about what causes health and disease, we will also need to keep track of more and more variables about an individual in order to provide optimal care. But more data points also mean it will become harder for a time-pressed doctor or nurse to note and remember every potentially relevant detail about a patient. Certainly, personal assistants can help medical personnel avoid bad drug interactions, keep track of history, and “perceive” trends and relationships in complex data more quickly than people are likely to. In addition, in the not-too-distant future, we can imagine AI programs finding complex relationships and “inventing” potential treatments.
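As a toy illustration of one piece of the bookkeeping such an assistant automates, here is a minimal drug-interaction check in Python. The drug names and the interaction table are entirely made up; a real system would query a curated, continually updated pharmacological database.

```python
from itertools import combinations

# Hypothetical interaction table. A real assistant would draw on a
# professionally curated database, not a hard-coded set.
BAD_PAIRS = {
    frozenset({"anticoagulant_x", "nsaid_y"}),
    frozenset({"statin_z", "antifungal_q"}),
}

def flag_interactions(medications):
    """Return every pair on the patient's medication list that is
    known (in our toy table) to interact badly."""
    return [tuple(sorted(pair))
            for pair in map(frozenset, combinations(medications, 2))
            if pair in BAD_PAIRS]

meds = ["anticoagulant_x", "statin_z", "nsaid_y"]
print(flag_interactions(meds))  # [('anticoagulant_x', 'nsaid_y')]
```

The point is not the three-line lookup but the scale: with dozens of medications and thousands of known interactions, a machine checks every pair instantly, while a time-pressed clinician cannot.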

Not only medicine but health more broadly provides a number of opportunities for technology to help. People often find it tricky to “force themselves” to follow the rules of health that they know to be good, such as getting enough exercise. Fitbit, Activity Tracker, LoseIt, and similar apps help track people’s habits, and for many, this really helps them stay fit. As computers become aware of more and more of our personal history, they can potentially find more personalized ways to motivate us to do what is in our own best interest.

In Chapter 5 of Turing’s Nightmares, we find that Jack’s own daughter, Sally, is unable to persuade Jack to see a doctor. The family’s PA (personal assistant), however, succeeds. It does this by using personal information about Jack’s history in order to engage him emotionally, not just intellectually. We have to assume that the personal assistant has either inferred or knows from first principles that Jack loves his daughter, and the PA uses that fact to help persuade Jack.

It is worth noting that the PA in this scenario is not at all arrogant. Quite the contrary, the PA acts the part of a servant and professes to still have a lot to learn about human behavior. I am reminded of Adam’s “servant” Lee in John Steinbeck’s East of Eden. Lee uses his position as “servant” to do what is best for the household. It’s fairly clear to the reader that, in many ways, Lee is in charge though it may not be obvious to Adam.

In some ways, having an AI system that is neither “clueless,” as most systems are today, nor “arrogant,” as we might imagine a super-intelligent system to be (and as the systems in chapters 2 and 3 were), but that instead feigns deference and ignorance in order to manipulate people, could be the scariest stance for such a system to take. We humans do not like being “manipulated” by others, even when it is for our own “good.” How would we feel about a deferential personal assistant who “tricks us” into doing things for our own benefit? What if it could keep us from over-eating, eating candy, smoking cigarettes, etc.? Would we be happy to have such a good “friend,” or would we instead attempt to misdirect it, destroy it, or ignore it? Maybe we would be happier just having something that presented the “facts” to us in a neutral way so that we would be free to make our own good (or bad) decisions. Or would we prefer a PA to “keep us on track” even while pretending that we are in charge?


Author Page

Welcome, Singularity

Destroying Natural Intelligence

E-Fishiness comes to Mass General Hospital

There’s a Pill for That

Essays on America: The Game

The Self-Made Man

Travels with Sadie

The Walkabout Diaries

The First Ring of Empathy

Donnie Gets a Hamster

Plans for US; some GRUesome

Imagine All the People

Roar, Ocean, Roar

The Dance of Billions

Math Class: Who are you?

Family Matters: Part One

Family Matters: Part Two

Family Matters: Part Three

Family Matters: Part Four

Turing’s Nightmares: Chapter Four

12 Wednesday Nov 2025

Posted by petersironwood in driverless cars, The Singularity, Uncategorized

≈ Leave a comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, illusion, philosophy, SciFi, technology, the singularity, Turing, virtual reality, writing

Considerations of “Turing’s Nightmares” Chapter Four: Ceci N’est Pas Une Pipe.

(This is a discussion or “study guide” for chapter four of Turing’s Nightmares). 

In this chapter, we consider the interplay of four themes. First, and most centrally, is the issue of what constitutes “reality.” The second theme is that what “counts” as “reality” or is seen as reality may well differ from generation to generation. The third theme is that AI systems may be inclined to warp our sense of reality, not simply to be “mean” or “take over the world” but to help prevent ecological disaster. Finally, the fourth theme is that truly super-intelligent AI systems might not appear so at all; that is, they may find it more effective to take a demure tone as the AI embedded in the car does in this scenario.

There is no doubt that, artificial intelligence and virtual reality aside, what people perceive is greatly influenced by their symbol systems, their culture and their motivational schemes. Babies as young as six weeks are already apparently less able to make discriminations of differences within what their native language considers a phonemic category than they were at birth. In our culture, we largely come to believe that there is a “right answer” to questions. Sometimes, that’s a useful attitude, but sometimes, it leads to suboptimal behavior.

Suppose an animal is repeatedly presented with a three-choice problem, let’s say among A, B, and C. A pays off randomly with a reward 1/3 of the time, while B and C never pay off. A fish, a rat, or a very young child will quickly come to choose only A, thus maximizing their rewards. However, a child who has been to school (or an adult) will spend considerably more time trying to find “the rule” that allows them (they suppose) to win every time. At first, it doesn’t even occur to them that perhaps there is no rule that will enable them to win every time. Eventually, most will “give up” and choose only A, but in the meantime, they do far worse than a fish, a rat, or a baby does. This is not to say that the conceptual frameworks that color our perceptions and reactions are always a bad thing. They are not. There are obvious advantages to learning language and categories. But our interpretations of events are highly filtered and distorted. Hopefully, we realize that this is so, but often we tend to forget.
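For readers who like to see the arithmetic, here is a small Python simulation of the three-choice task. The strategies are caricatures invented for illustration, not models taken from the original studies.

```python
import random

def run_trials(strategy, n_trials=30_000, seed=42):
    """Reward rate over repeated three-choice trials: option A pays off
    randomly 1/3 of the time; B and C never pay off."""
    rng = random.Random(seed)
    rewards = 0
    for _ in range(n_trials):
        if strategy(rng) == "A" and rng.random() < 1/3:
            rewards += 1
    return rewards / n_trials

# The fish/rat/baby strategy: having learned A is best, always choose A.
always_a = lambda rng: "A"

# A caricature of the rule-searching schooled human: keeps "testing"
# B and C half the time, hoping to find a rule that wins every time.
rule_searcher = lambda rng: rng.choice(["A", "A", "B", "C"])

# Always-A earns roughly 1/3 per trial; the rule searcher only about 1/6.
print(run_trials(always_a) > run_trials(rule_searcher))  # True
```

Every trial spent “testing” B or C has an expected payoff of zero, which is why the rule searcher earns about half as much even though it still picks A half the time.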

Similarly, if you ask the sports fans of two opposing teams to make a close call (for instance, whether there was pass interference in American football, or whether a tennis ball near the line was in or out), you tend to find that people’s answers are biased toward their team’s interest even when their calls have no influence on the outcome.

Now consider that we keep striving toward more and more fidelity and completeness in our entertainment systems. Silent movies were replaced by “talkies.” Black and white movies and television were replaced by color. Most TV screens have gotten bigger. There are 3-D movies, and more entertainment is in high definition, even as sound reproduction has moved from monaural to stereo to surround sound. Research continues to allow the reproduction of smell, taste, tactile, and kinesthetic sensations. Virtual reality systems have become smaller and less expensive. There is no reason to suppose these trends will lessen any time soon. There are many advantages to using Virtual Reality in education (e.g., Stuart, R., & Thomas, J. C. (1991). The implications of education in cyberspace. Multimedia Review, 2(2), 17-27; Merchant, Z., Goetz, E., Cifuentes, L., Keeney-Kennicutt, W., and Davis, T. (2014). Effectiveness of virtual reality based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers and Education, 70, 29-40). As these applications become more realistic and widespread, do they influence the perceptions of what even “counts” as reality?

The answer to this may well depend on the life trajectory of individuals and particularly on how early in their lives they are introduced to virtual reality and augmented reality. I was born in a largely “analogue” age. In that world, it was often quite important to “read the manual” before trying to operate machinery. A single mistake could destroy the machine or cause injury. There is no way to “reboot” or “undo” if you cut a tree down wrongly so it falls on your house. How will future generations conceptualize “reality” versus “augmented reality” versus “virtual reality”?

Today, people often believe it is important for high school students to physically visit various college campuses before making a decision about where to attend. There is no doubt that this is expensive in terms of time, money, and the use of fossil fuels. Yet, there is a sense that being physically present allows the student to make a better decision. Most companies similarly only hire candidates after face to face interviews even though there is no evidence that this adds to the predictive capability of companies with respect to who will be a productive employee. More and more such interviewing, however, is being done remotely. It might well be that a “super-intelligent” system might arrange for people who wanted to visit someplace physically to visit it virtually instead while making it seem as much as possible as though the visit were “real.” After all, left to their own devices, people seem to be making painfully slow (and too slow) progress toward reducing their carbon footprints. AI systems might alter this trajectory to save humanity, to save themselves, or both.

In some scenarios in Turing’s Nightmares, the AI system is quite surly and arrogant. But in this scenario, the AI system takes on the demeanor of a humble servant. Yet it is clear (at least to the author!) who really holds the power. This particular AI embodiment sees no necessity in appearing to be in charge. It is enough to make it so and to manipulate the “sense of reality” that the humans have.

Turing’s Nightmares

Wednesday

Labelism

Your Cage is Unlocked

Where do you Draw the Line?

The Walkabout Diaries: Sunsets

The First Ring of Empathy

The Invisibility Cloak of Habit

The Dance of Billions

The Truth Train

Roar, Ocean, Roar

Turing’s Nightmares: Chapter Three

11 Tuesday Nov 2025

Posted by petersironwood in The Singularity, Uncategorized

≈ 1 Comment

Tags

AI, Artificial Intelligence, chatgpt, cognitive computing, consciousness, ethics, philosophy, Robotics, technology, the singularity, Turing, writing

In chapter three of Turing’s Nightmares, entitled “Thank goodness the computer understands us!,” there are at least four major issues touched on. These are: 1) the value of autonomous robotic entities for improved intelligence; 2) the value, for improving intelligence, of having multiple and diverse AI systems living somewhat different lives and interacting with each other; 3) the apparent dilemma that if we make truly super-intelligent machines, we may no longer be able to follow their lines of thought; and 4) the fact that a truly super-intelligent system will have to rely to some extent on inferences from many real-life examples to induce principles of conduct, and not simply rely on having everything specifically programmed. Let us examine these one by one.

There are many practical reasons that autonomous robots can be useful. In some practical applications, such as vacuuming a floor, a minimal amount of intelligence is all that is needed to do the job under most conditions. It would be wasteful and unnecessary to have such devices communicating information back to some central decision-making computer and then receiving commands. In some cases, the latency of the communication itself would impair efficiency. A “personal assistant” robot could learn the behavioral patterns, voice, and preferences of a particular person more easily than we could build fully speaker-independent recognition of speech and preferences. The list of practical advantages goes on, but what is presumed in this chapter is that there are also theoretical advantages, in terms of moving us closer to “The Singularity,” to having actual robotic systems that sense and act in the real world. This theme is explored again, in somewhat more depth, in chapter 18 of Turing’s Nightmares.

I would not argue that having an entity that moves through space and perceives is necessary for having any intelligence, or for that matter, any consciousness. However, it seems quite natural to believe that the qualities of both intelligence and consciousness are influenced by what it is possible for the entity to perceive and to do. As human beings, our consciousness is largely influenced by our social milieu. If a person is born paralyzed, or becomes paralyzed later in life, this does not necessarily greatly influence the quality of their intelligence or consciousness, because the concepts of the social system in which they exist were historically founded by people who, collectively, were mobile and could perceive.

Imagine instead a race of beings who could not move through space or perceive any specific senses that we do. Instead, imagine that they were quite literally a Turing Machine. They might well be capable of executing a complex sequential program. And, given enough time, that program might produce some interesting results. But if it were conscious at all, the quality of its consciousness would be quite different from ours. Could such a machine ever become capable of programming a still more intelligent machine?

What we do know is that in the case of human beings and other vertebrates, the proper development of the visual system in the young, as well as the adaptation to changes (e.g., wearing glasses that displace or invert images), seems to depend on being “in control,” although that control, at least for people, can be indirect. In one ingenious experiment (Held, R. and Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872-876), two kittens were yoked together by a pivoted gondola; one kitten was able to “walk” through a visual field while the other was passively moved through that same visual field. The kitten who was able to walk developed normally while the other one did not. Similarly, simply “watching” TV passively will not do much to teach kids language (Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831-843; Kuhl, P. K., Tsao, F. M., and Liu, H. M. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proc Natl Acad Sci U S A, 100(15), 9096-9101). Of course, none of that “proves” that robotics is necessary for “The Singularity,” but it is suggestive.

Would there be advantages to having several different robots, programmed differently and living in somewhat different environments, communicate with each other in order to reach another level of intelligence? I don’t think we know. But diversity is an advantage when it comes to genetic evolution, and when it comes to the people comprising teams. (Thomas, J. (2015). Chaos, Culture, Conflict and Creativity: Toward a Maturity Model for HCI4D. Invited keynote @ASEAN Symposium, Seoul, South Korea, April 19, 2015.)

The third issue raised in this scenario is a very real dilemma. If we “require” that we “keep tabs” on developing intelligence by making them (or it) report the “design rationale” for every improvement or design change on the path to “The Singularity”, we are going to slow down progress considerably. On the other hand, if we do not “keep tabs”, then very soon, we will have no real idea what they are up to! An analogy might be the first “proof” that you only need four colors to color any planar map. There were so many cases (nearly 2000) that this proof made no sense to most people. Even the algebraic topologists who do understand it take much longer to follow the reasoning than the computer does to produce it. (Although simpler proofs now exist, they all rely on computers and take a long time for humans to verify). So, even if we ultimately came to understand the design rationale for successive versions of hyper-intelligence, it would be way too late to do anything about it (to “pull the plug”). Of course, it isn’t just speed. As systems become more intelligent, they may well develop representational schemes that are both different and better (at least for them) than any that we have developed. This will also tend to make it impossible for people to “track” what they are doing in anything like real time.
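The asymmetry in the four-color analogy can be made concrete with a short sketch: finding and verifying a coloring for any one small map is easy, as the backtracking search below shows, while the proof that every planar map is 4-colorable required a computer-checked analysis of many hundreds of cases. The tiny “map” here is invented for illustration.

```python
def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    """Backtracking search for a proper coloring of a small map:
    no two adjacent regions may share a color."""
    regions = list(adjacency)
    assignment = {}

    def ok(region, color):
        # A color is usable if no already-colored neighbor has it.
        return all(assignment.get(n) != color for n in adjacency[region])

    def solve(i):
        if i == len(regions):
            return True
        for c in colors:
            if ok(regions[i], c):
                assignment[regions[i]] = c
                if solve(i + 1):
                    return True
                del assignment[regions[i]]
        return False

    return assignment if solve(0) else None

# A tiny hypothetical "map": four mutually adjacent regions,
# which forces all four colors to be used.
adjacency = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}
coloring = four_color(adjacency)
print(len(set(coloring.values())))  # 4
```

Checking this one answer takes microseconds; it is the universal claim over all planar maps that outran human verification, which is exactly the dilemma for design rationale on the way to “The Singularity.”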

Finally, as in the case of Jeopardy, the advances along the trajectory of “The Singularity” will require that the system “read” and infer rules and heuristics based on examples. What will such systems infer about our morality? They may, of course, run across many examples of people preaching, for instance, the “Golden Rule.” (“Do unto others as you would have them do unto you.”)

But how does the “Golden Rule” play out in reality? Many, including me, believe it needs to be modified to: “Do unto others as you would have them do unto you if you were them and in their place.” Preferences differ, as do abilities. I might well want someone at my ability level to play tennis against me by pushing me around the court to the best of their ability. But does this mean I should always do that to others? Maybe they have a heart condition. Or maybe they are just not into exercise. The examples are endless. Famously, guys often imagine that they would like women to comment favorably on their physical appearance. Does that make it right for men to make such comments to women? Some people like their steaks rare. If I like my steak rare, does that mean I should prepare it that way for everyone else? The Golden Rule is just one example. Generally speaking, in order for a computer to operate in a way we would consider ethical, we would probably need it to see how people treat each other ethically in practice, not just “memorize” some rules. Unfortunately, the lessons of history that the singularity-bound computer would infer might not be very “ethical” after all. We humans have a history of destroying entire species when it is convenient, or sometimes just for the hell of it. Why would we expect a super-intelligent computer system to treat us any differently?
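The difference between the naive rule and the modified rule can be caricatured in a few lines of Python; the options, preference scores, and the steak example are invented for illustration.

```python
def golden_rule(actor_prefs, options):
    """Naive Golden Rule: act as *you* would like to be treated."""
    return max(options, key=lambda o: actor_prefs.get(o, 0))

def modified_golden_rule(recipient_prefs, options):
    """Modified rule: act as the *recipient*, in their place,
    would like to be treated."""
    return max(options, key=lambda o: recipient_prefs.get(o, 0))

options = ["steak_rare", "steak_well_done"]
my_prefs = {"steak_rare": 1.0, "steak_well_done": 0.2}
their_prefs = {"steak_rare": 0.1, "steak_well_done": 0.9}

print(golden_rule(my_prefs, options))              # steak_rare
print(modified_golden_rule(their_prefs, options))  # steak_well_done
```

The two functions differ only in whose preferences they consult, which is the whole point: a rule-memorizing machine applies the first, while an ethically sensitive one would need the second, and the second requires knowing (or inferring) what the other party actually wants.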

Turing’s Nightmares

Author Page

Welcome, Singularity

Destroying Natural Intelligence

How the Nightingale Learned to Sing

The First Ring of Empathy

The Walkabout Diaries: Variation

Sadie and The Lighty Ball

The Dance of Billions

Imagine All the People

We Won the War!

Roar, Ocean, Roar

Essays on America: The Game

Peace

  • nature
  • pets
  • poetry
  • politics
  • psychology
  • Sadie
  • satire
  • science
  • sports
  • story
  • The Singularity
  • Travel
  • Uncategorized
  • user experience
  • Veritas
  • Walkabout Diaries

Meta

  • Create account
  • Log in

Blog at WordPress.com.

  • Subscribe Subscribed
    • petersironwood
    • Join 664 other subscribers
    • Already have a WordPress.com account? Log in now.
    • petersironwood
    • Subscribe Subscribed
    • Sign up
    • Log in
    • Report this content
    • View site in Reader
    • Manage subscriptions
    • Collapse this bar
 

Loading Comments...