petersironwood

~ Finding, formulating and solving life's frustrations.

Tag Archives: driverless cars

Essays on America: The Temperature Gauge

09 Thursday Jan 2020

Posted by petersironwood in America, apocalypse, driverless cars, politics, Uncategorized

Tags

AI, America, cancer, Democracy, driverless cars, ethics, government

[Photo of green leafed trees by Drew Rae on Pexels.com]

The sun is shining! Spring is here at last, and the trees are in bloom. You’re driving down the road and you see … 

That your “Engine over-heating” light goes on! 

You think: My engine’s over-heating! 

Or,  you think, it isn’t over-heating at all; I just have a bad sensor. 

Over the next few months, the red light goes on several more times, and each time, you pull over and try to judge whether the engine is really over-heated. No easy task. But you get back in, turn the car on, and lo and behold, the light’s no longer on. Aloud, you mutter: “I’ve got to get that damned sensor fixed. Maybe next week.”

In the olden days of driving, cars gave me a continuous temperature gauge. It was more obvious when the gauge was acting oddly because I had more information. I could track it day to day. If I went on a long trip, I could see whether the behavior of the gauge “made sense.” I might go up a long mountain road on a hot sunny day and expect to see the temperature gauge climb. On the other hand, if I went back down that same mountain at night and the temperature gauge climbed, I would know to get it checked.

[Aerial photo of a road in the middle of trees by Deva Darshan on Pexels.com]

Suppose that instead of a gauge, all you or I get is one bit of information: “Temperature sensor says overheated.” It’s much harder to judge the veracity of the source. But if we cannot even trust the reliability of the sensor, then we don’t even get one bit of information. Before the light comes on, there are four possible states (not equally likely, by the way, but that’s not important for the following argument):

Engine OK, Sensor OK; 

Engine OK, Sensor ~OK; 

Engine ~OK, Sensor OK; 

Engine ~OK, Sensor ~OK. 

When the red light comes on, you have some information because the state of:

Engine OK, Sensor OK is eliminated. 
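In case it helps to see that reasoning spelled out, here is a minimal sketch in Python. The prior probabilities and the sensor-failure model are invented for illustration; the point is only that, under an honest-failure model, observing “light on” zeroes out the Engine OK, Sensor OK state.

```python
# Illustrative priors over the four states (made-up numbers).
P = {
    ("engine_ok", "sensor_ok"): 0.90,
    ("engine_ok", "sensor_bad"): 0.05,
    ("engine_hot", "sensor_ok"): 0.04,
    ("engine_hot", "sensor_bad"): 0.01,
}

def p_light(engine, sensor):
    """Chance the warning light is on, assuming the sensor can fail
    only at random: an honest sensor lights exactly when the engine
    is hot; a broken one lights half the time regardless."""
    if sensor == "sensor_ok":
        return 1.0 if engine == "engine_hot" else 0.0
    return 0.5

# Bayes' rule: posterior over the four states once the light comes on.
evidence = sum(P[s] * p_light(*s) for s in P)
for state in P:
    posterior = P[state] * p_light(*state) / evidence
    print(state, round(posterior, 3))
# ("engine_ok", "sensor_ok") drops to exactly zero -- but only because
# this model assumes the sensor never lies on purpose.
```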

But is it? 

It certainly is — under a certain set of assumptions — but let’s try to tease apart what those assumptions are and see whether they necessarily hold in today’s world, or in tomorrow’s world. 

Let’s imagine for a moment that your automobile is bewitched and inhabited by an evil demon with limited magical powers, mainly to do with the car itself. If you’ve seen the movie Christine, you’ll know what I mean. If you haven’t seen it, please buy the book instead. It’s so much better. But let’s get back to our own evil-spirited car. Let’s call this car “URUMPUT” because it sounds a bit like a car engine and because — you know, just because. Let’s imagine the car has a lot of mileage and is painted a kind of sickly orange color. The tires are bald, and it’s a real gas guzzler. It’s actually more of a jalopy than a car. Your friends would have assumed you could have done much better, but it is apparently what you’re stuck with for now.

URUMPUT, unbeknownst to you, is actually out to kill you, but his powers are limited. He cannot simply lock the doors and reroute the exhaust till you pass out from the fumes. So what he does is override the sensor so that you get out to take a look at your car. You open the hood, you look inside, and BLAM! Down comes the hood on your head with enough force to snap your neck. When your neck is snapped, you don’t die instantaneously. You are aware that something is terribly wrong. Your brain sends signals for you to move, to get the damned hood off, but you can’t move. And, worse, you can’t breathe. Soon, but much too late, you realize you are beyond saving.

You. 

Are. 

Dead! 

That blasted URUMPUT got you. Why? Just because he could. He paid you no more mind than if you had been an ant on the road. He gave you misinformation: information you thought you could trust because you assumed you were dealing with a system that, although imperfect, had some degree of transparency. You certainly did not think you were dealing with an actively evil agent. But you were. And now you’re dead. (But go ahead and read the rest as though you were still alive.)

Of course, in real life, there are no bewitched cars. We all know that. 

Do we? 

Let’s consider how much electronics and “smarts” already exists in cars. The amount will skyrocket with driverless cars. For one thing, the human “occupants” will be able to have much more engaging entertainment. Perhaps more importantly, the “brain” of the car will be able to react to a much wider array of data more quickly than most human drivers could. 

With all the extra sensors, communications, components, functions, protocols, etc. there will be greatly enhanced functionality. 

There will also be all sorts of places where a “bad actor” might intentionally harm the vehicle or even harm the occupants. Your insurance company, for instance, might fake some of the data in the black box of your car to indicate that you drove a lot during nighttime hours. It doesn’t seem to match your recollection, but how would you double check? You grudgingly pay the increased premium. 

[Photo of white graphing paper by Pixabay on Pexels.com]

Behind on your loan shark payments? Oops? Your driverless car just steered itself off a cliff and all the occupants were killed. 

Oh, but how, you ask, would loan sharks get hold of the software in your car? 

Then, I have to ask you a question right back. Have you been watching the news the last couple of years? People who owe a great deal of money to the wrong people will do anything to avoid the promised punishments that follow non-payment. 

Our government at this point is definitely not much like old-time cars that allowed you to see what was going on and make judgments for yourself. This government just sends out signals that say, “Everything’s Fine!” and “Do as I say!” and “Those people NOT like you? They are the cause of all your troubles.”

That is not transparency. 

That is not even informational. 

That is misinformation. 

But it is not misinformation of the sort where a student says: “Akron is the capital of Ohio.” That’s wrong, but it’s not maliciously wrong. 

When people lose a limb as a result of an accident, cancer, or war, they often experience something called the “Phantom Limb Experience.” They have distinct sensations, including pain, “in” the limb that is no longer there. The engine’s not working but the sensor is also bad. 

That’s where we are. 

The engine’s not working. The feedback to us about whether it’s working is also malicious misinformation. 

We have the Phantom Limb Experience of having a government that is working for American interests. 

We need to regrow the missing limb or get a really good prosthetic. 

We need straight information from the government, which is supposed to take input from all of us and then make decisions for all of us. It’s never been perfect, but this is the first time it is not even trying or pretending to be fair or even accurate. People in top-level positions in our government think that their oath of office is a joke.

We live in a monster car — and not the fun kind — the Christine kind. 

The engine’s not working. And the sensor light means nothing. If you look under the hood to find out what’s really going on, you’d better have a partner ready to grab the hood and prevent it from being slammed down on your head. Because URUMPUT would do it with as little regard for you as he has shown for any other whistleblower.

[Close-up photo by Pixabay on Pexels.com]

———————————————

The Invisibility Cloak of Habit

Author Page on Amazon

Story about Driverless Cars (from Turing’s Nightmares). 

A Once-Baked Potato

28 Saturday Sep 2019

Posted by petersironwood in America, driverless cars, politics, psychology

Tags

AI, automation, driverless cars, life, politics, truth

[Closeup photo of potatoes by Pixabay on Pexels.com]

I’m really not ready to go for a long, high speed trip in a completely automated car. 

[Photo of an empty concrete road near trees by Alec Herrera on Pexels.com]

I say that because of my baked potatoes. One for me. One for my wife. 

I’ve done it many times before. Here is my typical process. I take out a variety of vegetables and chop the broccoli, red onion, garlic, and red pepper while the potatoes are in the microwave. I put them in for some arbitrary time like 4:32 and then, when that times out, I “test” the potatoes with a fork and put them in for more time. Actually, before I even take them out to use the “fork test,” I shake the potatoes. I can tell from the “feel” whether they are still rock hard. If they are marginal, then I use the more sensitive “fork test.” Meanwhile, I chop more vegetables and take out the cheese. I test the potatoes again. At some point, they are well done, and I slather them up with butter and cheese and then add the chopped vegetables.
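Rendered as code, my usual process is just a feedback loop with two crude sensors, my hands and a fork. A minimal sketch in Python, with the “doneness” numbers and cooking rate invented, of course:

```python
def shake_test(potato):
    # Coarse sensor: the potato no longer feels rock hard.
    return potato["doneness"] > 0.5

def fork_test(potato):
    # Finer sensor: a fork slides in easily.
    return potato["doneness"] > 0.9

def microwave(potatoes, seconds):
    for p in potatoes:
        p["doneness"] += seconds / 600.0  # invented cooking rate

def bake(potatoes):
    microwave(potatoes, 272)  # the arbitrary first guess: 4:32
    while not all(shake_test(p) and fork_test(p) for p in potatoes):
        # ...chop more vegetables here...
        microwave(potatoes, 60)  # another minute, then re-test
    print("Well done. Butter, cheese, chopped vegetables.")

bake([{"doneness": 0.0}, {"doneness": 0.0}])  # one for me, one for my wife
```

Crude, but the feedback and the frequent re-testing are the whole point.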

[Photo of vegetables in a kitchen by Pixabay on Pexels.com]

Delicious. 

But today is different. 

I pushed a button on the microwave that says “Baked Potato.” Right away, I think: “Baked potato? I’m not putting in a baked potato. I’m putting in a raw potato. You have a button labelled ‘Popcorn’ — it doesn’t say ‘Popped Corn’ — so … ?” Anyway, I decided to give it a try.

The first disadvantage I see is that I have no idea whatsoever how long this process is going to take. I assume it has to take at least four and a half minutes. When I cook it via my usual process, it’s on “high” or “full power.” So, unless the microwave has a “hidden” higher power level that its internal programs can access but its end users cannot, it seems I have at least four and a half minutes to chop.

Changing the way you do things always causes a little bit of discomfort, though often, a feeling of adventure outweighs that cautionary urge. In this case, I felt a lot of discomfort. The microwave can’t feel how done the potato is so it must be using some other sensor or sensors — likely moisture — though there may be other ways to do it. How do I know that the correlation between how I measure “doneness” and how the microwave measures “doneness” is even moderate? I am also a little concerned that there are, after all, two potatoes, not just one. There was no way to tell the machine that I had two potatoes. I decided that it was likely that the technical problems had been solved. 

Why? Certainly not because I have great faith in large multinational corporations to “do what’s right” rather than do what’s expedient. Once upon a time, not so many years ago, that really was my default assumption. But no longer. Too many lies by too many corporations about too many separate topics. Once upon a time, the government held some power to hold corporations accountable for their actions. Now, the power seems to have shifted so that many politicians — too many — are beholden to their corporate owners.  

Corporations just try to work in their own self-interest. They aren’t very good at it, but that’s their goal.

Among the common ways they fail is by being too conservative. If they are successful doing things a certain way, they often keep at it despite changes in the technology, the markets, the cost structures, the distribution possibilities, etc. (They are too afraid to push the “Baked Potato” button.) At the same time, there seems to be no evil that many of them would forswear in order to grow their profits; no lie too preposterous for them to tell.

[Photo of a black and grey camera by Alex Andrews on Pexels.com]

Yet, I live, at least for now, in this world surrounded by products made by these companies and interacting with them all the time. I cannot trust them as a whole, but it’s almost impossible not to rely on some of them some of the time. They can’t fool all of the people all of the time. 

I do calculate that if they put these buttons on there and they were horrible, word would get around and they would lose market share. This presumes that there is real competition in the market. 

I think it likely that driverless cars will be “safer” than human drivers on average within ten years, and possibly sooner. My discomfort stems, again, partly from habit, but largely from a lack of confidence in the ethics of corporations. Normally, I would think that when it comes to life and death, at least, I can put some degree of faith in the government to oversee these companies enough to ensure their safety data were accurate. 

But I no longer believe that. And even after Trump resigns or gets impeached & convicted or he flees to Russia, there is no way to know how deeply and pervasively this corrupt misadministration has crept into the ethics of lesser government officials.  Any government official might think: “after all, if the President is flouting the Constitution by using the power of his office for his own benefit, why shouldn’t I? I need a bribe just as much as the next person and I certainly need the money more than Trump did!”

[Photo by Pixabay on Pexels.com]

Beep. Beep. 

The microwave claims the potatoes are done. 

And so they are. Perfectly. 

There is still hope for America. 

Maybe I will be able to take that ride after all. 

Author Page on Amazon. 

Corn on the Cob

Parametric Recipes and American Democracy 

Pies on Offer

Garlic Cloves and Puffer Fish

The Pros and Cons of AI: Part One

Basically Unfair is Basically Unsafe

05 Tuesday Apr 2016

Posted by petersironwood in apocalypse, The Singularity, Uncategorized

Tags

AI, Artificial Intelligence, cognitive computing, driverless cars, Robotics, the singularity, Turing


In Chapter Eleven of Turing’s Nightmares, a family is attempting to escape from impending doom via a driverless car. The car operates by a set of complex rules, each of which seems quite reasonable in and of itself and under most circumstances. The net result, however, is probably not quite what the designers envisioned. The underlying issue is not so much a problem with driverless cars, robotics, or artificial intelligence. It has more to do with the very tricky business of separating problem from context. In designing any complex system, regardless of what technology is involved, people generally begin by taking some conditions as “given” and others as “things to be changed.” The complex problem is then separated into sub-problems. If each of the sub-problems is well solved, the implicit theory is that the overall problem will be solved as well. The tricky part is deciding what we consider “problem” and what we consider “context,” and separating the overall problem into relatively independent sub-problems.

Dave Snowden tells an interesting story from his days consulting for the National Water Service in the UK. The Water Service included in its employ engineers to fix problems and dispatchers who answered phones and dispatched engineers to fix those problems. Engineers were incented to solve problems while dispatchers were measured by how many calls they handled in a day. Most of the dispatchers were young but one of the older dispatchers was considerably slower than most. She only handled about half the number of calls she was “supposed to.” She was nearly fired. As it turned out, her husband was an engineer in the Water Service. She knew a lot and her phone calls ended up resulting in an engineer being dispatched about 1/1000 of the time while the “fast” dispatchers sent engineers to fix problems about 1/10 of the time. What was happening? Because the older employee knew a lot about the typical problems, she was actually solving many of them on the phone. She was saving her company a lot of money and was almost fired for it. Think about that. She was saving her company a lot of money and was almost fired for it.

In my dissertation, I compared the behavior of people solving a river-crossing problem to the behavior of the “General Problem Solver” — an early AI program developed by Shaw, Newell and Simon at Carnegie-Mellon University. One of the many differences was that people behave “opportunistically” compared with the General Problem Solver of the time. Although the original authors of GPS felt that its recursive nature was a feature, Quinlan and Hunt showed that there was a class of problems on which their non-recursive system (Fortran Deductive System) was superior.

Imagine, for example, that you wanted to read a new book (e.g., Turing’s Nightmares). In order to read the book, you need to have the book, so your goal becomes purchasing the book. In order to meet that goal, you realize you will need to get $50 in cash. Now, getting $50 in cash becomes your goal. You decide that to meet that goal, you could volunteer to shovel the snow from your uncle’s driveway. On the way out the door, you mention your entire goal structure to your roommate because you need to borrow their car to drive to your uncle’s house. They say that they have already purchased the book and you are welcome to borrow it. The original GPS, at this point, would have solved the book-reading problem by solving the book-purchasing problem by solving the getting-cash problem by going to your uncle’s house by borrowing your roommate’s car! You, on the other hand, like most individual human beings, would simply borrow your roommate’s copy and curl up in a nice warm easy chair to read the book. However, when people develop bureaucracies, whether business, academic, or governmental, those bureaucracies may well have spawned different departments, each with its own measures and goals. Such bureaucracies might well end up going through the whole chain in order to “solve the problem.”
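A toy rendering of the difference, with the goal chain and the world model invented for illustration: the GPS-style solver grinds through the whole subgoal chain it built, while the opportunistic solver re-checks the world before recursing.

```python
world = {"roommate_has_book": True}

# Each goal reduces to one subgoal (None means a directly doable action).
PLAN = {
    "read the book": "purchase the book",
    "purchase the book": "get $50 in cash",
    "get $50 in cash": "shovel uncle's driveway",
    "shovel uncle's driveway": "borrow roommate's car",
    "borrow roommate's car": None,
}

def gps_style(goal, depth=0):
    """Recursively reduce goals to subgoals, never noticing shortcuts."""
    print("  " * depth + "working on: " + goal)
    subgoal = PLAN[goal]
    if subgoal is not None:
        gps_style(subgoal, depth + 1)

def opportunistic(goal):
    """Before recursing, check whether the world already offers a
    cheaper route to the top-level goal."""
    if goal == "read the book" and world["roommate_has_book"]:
        print("borrow the roommate's copy and curl up in the easy chair")
        return
    gps_style(goal)

gps_style("read the book")      # solves the whole chain, step by step
opportunistic("read the book")  # takes the shortcut
```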

Similarly, when groups of people design complex systems, the various parts of the system are generally designed and built by different groups of people. If these people are co-located, and if there is a high degree of trust, and if people are not micro-managed, and if there is time, space, and incentive for people to communicate even when it is not directly in the service of their own deadlines, the design group will tend to “do the right thing” and operate intelligently. To the extent, however, that companies have “cut the fat,” discouraged “time-wasting” activities like socializing with co-workers, and “saved money” by outsourcing huge chunks of the designing and building process, you will be lucky if the net result is as “intelligent” as the original General Problem Solver.

Most readers will have experienced exactly this kind of bureaucratic nonsense when encountering a “good employee” who has no power or incentive to do anything but follow a set of rules that they have been warned to follow regardless of the actual result for the customer. At bottom, then, the root cause of the problems illustrated in Chapter Eleven is not “Artificial Intelligence” or “Robotics” or “Driverless Cars.” The root issue is what might be called “Deep Greed.” The people at the very top of companies squeeze every “spare drop” of productivity from workers, making globally intelligent choices nearly impossible due to a lack of knowledge and a lack of incentive. This is combined with what might be called “Deep Hubris” — the idea that all contingencies have been accounted for and that there is no need for feedback, adaptation, or work-arounds.

Here is a simple example that I personally ran into, but readers will surely have many of their own examples. I was filling out an on-line form that asked me to list the universities and colleges I attended. Fair enough, but instead of having me type in the institutions, the designers used a pull-down list! There are somewhere between 4,000 and 7,500 post-high-school institutions in the USA and around 50,000 worldwide. The mere fact that the exact number is so hard to pin down should give designers pause. Naturally, for most UIs and most computer users, it is much faster to type in the name than to scroll to it. Of course, the list keeps changing too. Moreover, there is ambiguity as to where an item should appear in the alphabetical list. For example, my institution, The University of Michigan, could conceivably be listed as “U of M,” “University of Michigan,” “Michigan,” “The University of Michigan,” or “U of Michigan.” As it turns out, it isn’t listed at all. That’s right. Over 43,000 students were enrolled last year at Michigan, and it isn’t even on the list, at least so far as I could determine in any way. That might not be so bad, but the form does not allow the user to type in anything. In other words, despite the fact that the category “colleges and universities” is ever-changing, a bit fuzzy, and suffers from naming ambiguity, the designers were so confident of their list being perfect that they saw no need for allowing users to communicate in any way that there was an error in the design. If one tries to communicate “out of band,” one is led to a FAQ page and ultimately a form to fill out. That form presumes that all errors are user errors and that all of these user errors come from a small class of pre-defined errors! That’s right! You guessed it! The “report a problem” form again presumes that every problem that exists in the real world has already been anticipated by the designers. Sigh.
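For contrast, here is a sketch of the forgiving version of that field: suggest close matches against the (necessarily incomplete) known list, but always accept what the user actually typed. The list entries and the similarity cutoff are placeholders.

```python
import difflib

# A deliberately incomplete list, because every real list of
# institutions is incomplete.
KNOWN = [
    "University of Michigan",
    "Michigan State University",
    "Ohio State University",
]

def resolve_institution(user_text):
    """Offer the closest known names, but never refuse the user's answer."""
    matches = difflib.get_close_matches(user_text, KNOWN, n=3, cutoff=0.6)
    if matches:
        return {"canonical": matches[0], "as_entered": user_text}
    # No match: keep the raw text and flag it for human review,
    # instead of pretending the list already covers the world.
    return {"canonical": None, "as_entered": user_text, "needs_review": True}

print(resolve_institution("U of Michigan"))  # maps to the canonical name
print(resolve_institution("Hogwarts"))       # accepted, flagged for review
```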

So, to me, the idea that Frank and Katie and Roger would end up as they did does not seem the least bit far-fetched. As I mentioned, the problem is not with “artificial intelligence.” The problem is not even that our society is structured as a hierarchy of greed. In the hierarchy of greed, everyone keeps their place because they are motivated to get just a little more by following the rules they are given from above and keeping everyone below them in line following their rules. It is not a system of involuntary servitude (for most) but a system of voluntary servitude. It seems to the people at each level that they can “do better” in terms of financial rewards or power or prestige by skimming just a little more from those below. To me, this can be likened to the game of Jenga™. In this game, there is a high stack of rectangular blocks. Players take turns removing blocks. At some point, of course, what is left of the tower collapses and one player loses. However, if our society collapses from Deep Greed combined with Deep Hubris, everyone loses.

Newell, A., Shaw, J.C., & Simon, H.A. (1959). Report on a general problem-solving program. Proceedings of the International Conference on Information Processing, pp. 256–264.

Quinlan, J.R., & Hunt, E.B. (1968). A formal deductive problem-solving system. Journal of the ACM, 15(4), 625–646. DOI: 10.1145/321479.321487

Thomas, J.C. (1974). An analysis of behavior in the hobbits-orcs problem. Cognitive Psychology, 6, 257–269.

Turing’s Nightmares
