petersironwood

~ Finding, formulating and solving life's frustrations.


Points and Trajectories

Sunday, 06 Sep 2020

Posted by petersironwood in Uncategorized


Tags

adaptation, America, cognition, difference, learning organization, politics, psychology, similarity, USA

Finding Common Ground: Points and Trajectories

Much of our education trains us to make distinctions. Little of it trains us to see similarities. Both are important. If you are in the business of foraging for berries, it’s a very good idea to eat the edible ones and not the poison ones. This means that it’s a nice skill to be able to distinguish them.

On the other hand, for many purposes, it’s important to see similarities as well. When it comes to human beings, most of us spend far too much time noticing differences between people and far too little time noticing similarities. 

In a large organization, focusing on differences among employees is often used as an excuse for keeping ineffective, inefficient processes, procedures and tools. For example, a manager might insist that all programming be done in a particular language that might have been state of the art decades earlier. As the organization continues to face deadline after deadline, it looks to the manager as though changing tools or processes will simply delay things further (indeed, it likely will for a time). So, year after year, the management delays a look at better equipment, tools, and training.



Part of their rationale is that some people are still very productive, so it can’t be the tools and systems. It’s just that the other people aren’t working hard enough or aren’t smart enough, so the really good programmers get promoted to managers. Many of the best programmers will nonetheless eventually see themselves as getting more and more out of date in their technical skills and “jump ship” before it’s too late.

This isn’t to say that there aren’t real differences in programmers. Of course there are. But those differences are too often used as an excuse for bad management. Quite likely, everyone would be more productive if there were changes, but individual differences serve as the “proof” that none are needed.

It isn’t just in programming. When we meet someone, we are much more likely to notice how they differ from others. Are they unusually tall? Short? Striking blue eyes? Or brown? Are they more muscular than average? More obese? Unusually skinny? As they begin to talk, we tick off other boxes: are they smart? Well-read? Do they have an accent? Where were they born? Where do they live? What job do they have? Are they well-off financially? 

Photo by Minervastudio on Pexels.com

Very seldom do we take the time to reflect on how very similar this person is to every other human being and to us, and for that matter, even to other life forms.

Perhaps we should think more about trajectories and less about points.

For example, let’s say you meet someone who is older than you, bald, and has a salt-and-pepper beard. His young son is with him. The son is neither bald, nor bearded, nor older than you are. The three of you are all different — at this point!

What if you perceived these features, not in terms of points, but in terms of trajectories? For example, age is a moving target. Some day, if he is lucky, the son will be the same age as the father is now. He will likely also grow bald. He might or might not grow a beard but he could. If he did grow such a beard at a young age, it would likely start out all dark and gradually turn to white — not uniformly in time, but with a trajectory that will very likely look a lot like that pattern of change experienced by his father’s beard (and the beards of many other males).

Photo by Arianna Jadé on Pexels.com



In general, we have more commonality in our trajectories than in our momentary status. For example, your bone density might be greater or less than mine, but the bones of both of us will generally become less dense as we age. And that trajectory is true for virtually everyone. Furthermore, if any of us go up in space, our bone density will lessen quickly. Conversely, if we stay on earth and do weight-bearing exercise, our bone density will increase. 

Trajectories are typically more diagnostic than static measurements.

For example, would you buy a used car based on simply looking at it, or sitting in it? Of course not. You want to make sure the car actually works. You want to take it out for a test drive.

For your annual physical, the doctor might look at your fasting blood sugar level. If it’s too high or too low, he may order a more sensitive test — a glucose tolerance test. How your body reacts to a sudden influx of sugar is more indicative of underlying health than is the static level.

Similarly, your doctor might simply “listen to your heart” or take a resting cardiogram. A stress test, though, is more revealing of function.

Aristotle is credited with saying “Character is revealed by choices under pressure.” This is the great truth of literature. It isn’t one’s current status that reveals one’s character. They might have been born rich or poor or blind or in peace or in war. It makes a difference to them, of course, but what the reader wants to see is what they make of what they are. How do they bend that trajectory to inspire others, save lives, learn from their errors, reform themselves, or prove their loyalty? Or, on the other side, how do they exhibit mindless selfishness, betray others, or refuse to change, leaving disaster in their wake?

It isn’t the challenge, per se, that’s critically important. It’s how a person either bravely met a challenge, or how they showed their essential cowardice: refusing to see the problem, refusing to admit the problem, and blaming everyone else for their inevitable failure to solve it.


Photo by Nafis Abman on Pexels.com

—————————————-

 Trumpism is a New Religion

You Bet Your Life

Essays on America: Rejecting Adulthood

Essays on America: The Game

Absolute is not just a Vodka

The Update Problem

The Stopping Problem

The Primacy Effect & The Destroyer’s Advantage

Author Page on Amazon

Is Smarter the Answer?

Monday, 31 Oct 2016

Posted by petersironwood in psychology, The Singularity, Uncategorized


Tags

AI, Artificial Intelligence, cognitive computing, ethics, learning organization


Lately, I have been seeing a fair number of questions on Quora (www.quora.com) that basically ask whether we humans wouldn’t be “better off” if AI systems do “take over the world.” After all, it is argued, an AI system could be smarter than humans. It is an interesting premise and one worthy of consideration. Indeed, it is clear that human beings have polluted our planet, have been involved in many wars, and have often made a mess of things; right now, we are a mere hair’s breadth away from electing a US President who could start an atomic war for no more profound reason than that someone disagreed with him or questioned the size of his hands.

Personally, I don’t think that having AI systems “replace” human beings or “rule them” would be a good thing. There are three main reasons for this. First, I don’t think that the reason human beings are in a mess is because they are not intelligent enough. Second, if AI systems did “replace” human beings, even if such systems were not only more intelligent but also avoided the real reasons for the mess we’re in (greed and hubris, by my lights), they could easily have other flaws of equal magnitude. The third reason is simply that human life is an end in itself, and not a means to an end.  Let us examine these in turn.

First, there are many species of plants and animals on earth that are, by any reasonable definition, much less intelligent than humans and yet have not over-polluted the planet nor put us on the brink of atomic war. There are at least a few other species, such as dolphins, that are about as intelligent as we are but who have not had anything like the worldwide negative ecological impact that we have. No, although we often run into individual people who act against our (and their own) interest, and it seems as though we (and they) would be better off if they were more intelligent, I don’t think lack of intelligence (or even education) is the root of the problem with people.

Here are some simple, everyday examples. I went to the grocery store yesterday. When I checked out, someone else packed my groceries. Badly. Indeed, almost every time I go to the store, they pack the groceries badly (if I can’t pack them myself). What do I mean by badly? One full bag had ripe tomatoes at the bottom. Another paper bag was filled with cans of cat food; it was too heavy for the handles. A third bag was packed lightly, but so full that the handles would break if you held the bag naturally.

It might be tempting to think that this bagger was not very intelligent. I believe that the causes of bad packing are different. First, packers typically (but not universally) pay very little attention to what they are actually doing. They seem to be clearly thinking about something other than what they are doing. Indeed, this describes a lot of human activity, at least in the modern USA. Second, packers are in a badly designed system. Once my cart is loaded up, another customer is already having their food scanned on the conveyor belt and the packer is already busy. There is no time to give feedback to the packer on the job they have done. Nor is the situation really very socially appropriate. No matter how gently done, a critique of their performance in front of their colleagues and possibly their manager will be interpreted as an evaluation rather than an opportunity for learning. Even if I did give them feedback, they may or may not believe it. It would be better if the packer could follow me home and observe for themselves what a mess they have made of the packing job. I think if they did that a few times, they’d be plenty smart enough to figure out how to pack better.

Unfortunately, packing is not the only example of this type of system. Another common example is software development. Programmers are typically quite intelligent. But they often build their software and never get a chance to see it in action. Many organizations do not carry out user studies “in the wild” to see how products and services are actually used. It isn’t that the software builders are not smart. But it is problematic that they do not get any real feedback on their decisions. Again, as in the case of the packers, the programmers exist in an organizational structure that makes honest feedback about their errors far too often seem like an evaluation of them, rather than an occasion for learning.

A third example is hotel personnel. A hotel is basically a service business. The cost of the room is a small part of the price. A hotel exists because it serves its customers. Despite this, the people behind the desks seldom have incentives and mechanisms to hear, understand, and fix problems that their customers encounter. A quintessential example came in Boston when my wife and I were there for a planning meeting for a conference she would be chairing in a few months. When we checked out, the clerk asked whether everything was all right. We replied that the room was too hot but we couldn’t seem to get the air conditioning to work. The clerk said, “Oh, yes! Everyone has that problem. You need to turn on the heater for the A/C to work.” This was a bad temperature control design for starters, but the clerk’s response clearly indicated that they were aware of the problem but had no power (and/or incentive) to fix it.

These are not isolated examples. I am sure that you, the reader, have a dozen more. People are smart enough to see and solve the problems, but that is not their job. Furthermore, they will basically get “shot down,” or at best ignored, if they try to fix the problem. So, I really don’t think the issue is that people, individually, are not “smart enough” to fix many of the problems we have. It is that we design systems that make us collectively not very smart. (Of course, in outrageous cases, even some individual humans are so prideful that they cannot learn from honest feedback from others.)

Now, you could say that such systems are themselves proof that we are not smart enough. However, that is not a very good explanation. There are existence proofs of smarter organizations. The sad part is that they are exceptions rather than the rule. In my experience, what keeps people from adopting better organizations (e.g., ones where people are empowered to understand and fix problems) is hubris and greed, not a lack of intelligence.

Firstly, in many situations, people believe that they already know everything they need in order to do their job. They certainly don’t want public feedback indicating that they are making mistakes (i.e., could improve) and this attitude spreads to their processing of private feedback. You can easily imagine a computer programmer saying, “I’ve been writing code for User Interfaces for thirty years! Now, you’re telling me I don’t know how?” Why can we imagine that so easily? Because the organizations that most of us live in are not organizations where learning to improve is stressed.

In many organizations, the rules, processes, and management structure make very little sense if the main goal is to make the organization as effective as possible. They make perfect sense, however, if the main goal of the organization is to let the people who have the most power and make the most money keep having the most power and making the most money. In order to do that on an ongoing basis, it is true that the organization must be minimally competent. If it is a grocery store, it must sell groceries at some profit. If it is a software company, it needs to produce some software. If it is a hotel, it can’t simply poison all its potential guests. But to stay in business, none of these organizations must do a stellar and ever-improving job.

So, from my perspective, the reason that most organizations are not better learning organizations is not that we humans are not intelligent enough. The reason for marginally effective organizations is that the actual goal is mainly to keep people at the top in power. Greed is the biggest problem with people, not lack of intelligence. History shows us that such greed is ultimately self-defeating. Power corrupts, all right, and eventually power erodes itself or explodes in revolution. But greedy people continue to believe that they can outsmart history. Dictators believe that they will not suffer the same fate as Hitler or Mussolini. CEOs believe their bad deeds will go unpunished (indeed, often that’s true). So-called leaders often reject criticism by others and eventually spin out of control. That’s hubris.

I see no reason whatever to believe that AI systems, however intelligent, would be more than reflections of greed and hubris. It is theoretically possible to design AI systems without hubris and greed, but it is also quite possible to develop human beings in whom hubris and greed are not predominant motivations. We all know people who are eager to learn throughout life; who listen to others; who work collaboratively to solve problems; who give generously of their time and money and possessions. In fact, humans are generally very social animals, and it is quite natural for us to worry more about our group, our tribe, our country, our family than our own little ego. How much hubris and greed are in an AI system will very much depend on the nature and culture of the organization that builds it.

Next, let us consider what other flaws AI systems could have.

Author Page on Amazon
