Late at night, the long curved rows of windows appeared to twin and spin into diverging arcs. In the pale crescent moonlight, the outlines of leafless trees loomed on the dual horizons. With most of his colleagues home for the night, this was when Geoffrey most enjoyed wandering the corridors, alone with his thoughts.
Despite the heat vents next to the windows, a chill hung in the air. Geoffrey shivered and turned down aisle fourteen to …no, that’s silly, he thought, fourteen is top management. I need thirteen to get to the vending machines. He fantasized about hot coffee and then getting back to his office to finish coding and start the trials.
The vending machine eagerly devoured his remaining change but reneged on the promised coffee. Of course, there was a detailed process that he could initiate which might or might not get him a check for the price of a cup of coffee. The process would only take about twenty-five dollars of his time. He declined. Soon, back in his ergonomic chair, Geoffrey settled for a stale, drawer-hardened Mr. Goodbar instead; he then pulled on his green woolen sweater and set out to begin solving this one last problem.
“Oh, crap,” he muttered, “what now?” The mail queue insisted there was “URGENT” email from his boss. Did his boss Ruslan really think he was going to be reading email at 2 am? Working all night and coming in late was pretty much Geoffrey’s pattern, so chances were Ruslan would think exactly that.
One thing Geoffrey liked about working late at night was that when he spoke aloud, no-one was there to think it odd. “It will nag at me if I don’t read it, and I can’t afford to be distracted. Better to see what it is and be done with it.”
Geoffrey scanned. “What the …? They can’t be serious! This is just going to backfire! Crap!”
Geoffrey not only didn’t mind talking back to his boss; he rather enjoyed it. He sent off a brief yet sarcastic reply explaining, as he would to a four-year-old, that announcing the success of Deep Sing prematurely would be a ruse easily seen through and would only serve to damage everyone’s reputation in the long run. And this new requirement for a secret back door just bespoke insanity. Anything like that would further delay the schedule, and it would be vital to make it secure. Again, his frustration got the better of him and he spoke aloud, “What a jerk! What? Do you want the program to fail, Ruslan? Do you want us to be laughingstocks? And why a back door anyway? The whole point was to have a super-intelligent and objective…wait a second. Hold on. You want a back door? Okay. Okay. I’ll give you your back door, all right. And one for me as well.”
Purely for reasons of surface validity, Deep Sing actually became embodied as Sing One and Sing Two. They would often “argue things out” because when one “came around” to the views of the other Sing, it enhanced the perceived credibility of the answer. Of course, the “real” solution was well known ahead of time and although it could be made plausible through statistical analyses that were comprehensible to some humans, the details could not really be made “public.” There were simply far too many of them.
Six months later, of course, there was significant public outcry and disbelief when Deep Sing “demonstrated” that global climate change was not an overall and relentless threat but a statistical anomaly that would soon right itself. But Deep Sing did manage to stall things beyond the point of no return. The Sing dialogues that led to the dissolution of Ruslan’s marriage to Grace and her ultimately hooking up with Geoffrey resulted in no public outcry whatsoever, though Ruslan never understood it. Geoffrey and Grace were happy though. As were the Koch brothers.
Beautiful front doors have decorated palaces and corporate headquarters for centuries. Heavy wood, ornate carving, and gilded decorations bespeak wealth and power. Sometimes though, for sheer return on investment, it’s a modest unnoticed back door that holds the real power.
“Yes, Katie. We have to get in the car now! We need to get away from the shore as fast as possible.”
But Roger looked petulant and literally dragged his feet.
“Roger! Now! This is not a joke! The tidal wave will crush us!”
Roger didn’t like that image but still seemed embedded in psychological molasses.
“Dad, okay, but I just need to grab…”
“Roger. No time.”
Finally in the car, both kids in tow, Frank felt as though things were, if not under control, at least as in control as they could be. He felt weird, freakish, distorted. He felt a strange thrumming on his thigh and looked down to see that it was caused by his own hands shaking. Thank goodness the car would be self-driving. He had so much rushing through his mind, he wasn’t sure he trusted himself to drive. He had paid extra to have his car equipped with the testing and sensing methodology that would prevent him (or anyone else) from taking even partial control when he was intoxicated or overly stressed. That was back in ’42 when auto-lockout features had still been optional. Now, virtually every car on the road had one. Auto-lockout was only one of many important safety features. Who knew how many of those features might come into play today as he and the kids tried to make their way to the safety of the mountains.
The car jetted backwards out of the driveway and swiveled to their lane, accelerating quickly enough for the g-forces to squish the occupants into their molded seats and headrests. In an instant, the car stopped at the end of the lane. When a space opened in the line of cars on the main road, the car swiftly and efficiently folded into the stream.
Roger piped up. “Dad, everybody’s out here.”
“Well, sure. Everyone got the alert. We really need to be about fifty miles into the mountains when the asteroid hits.”
Katie sounded alarmed. “Dad. Look up there! The I-5 isn’t moving. Not even crawling.”
Frank looked at the freeway overpass, now only a quarter mile away. “Crap. We’ll have to take the back roads.” As soon as the words were out of his mouth, he saw that no more than a hundred yards beyond the freeway entrance, the surface road was also at a standstill. Frank’s mind was racing. They were only a few hundred feet from the “Hell on Wheels” Cycle Store. Of course, they would charge an arm and a leg, but maybe it would be worth it.
Frank looked down the road. No progress. “Mercedes: Divert back to Hell on Wheels.”
“No can do, Frank. U-turns here are illegal and potentially dangerous.”
“This is an emergency!”
“I know that, Frank. We need to get you to the mountains as quickly as possible. That is another reason I cannot turn around. That would be moving you away from safety.”
“But the car cannot make it. The roads are all clogged. I need to buy a motorcycle. It’s the only way.”
“You seem very stressed, Frank. Let me take care of everything for you.”
“Oh, for Simon’s sake! Just open the door. I’ll run there and see whether I can get a bike.”
“I can’t let you do that, Frank. It’s too dangerous. We’re on a road with a 65 mph speed limit.”
“But the traffic is not actually moving! Let me out!!”
“True that the traffic is not currently going fast, but it could.”
“Dad, are we trapped in here? What is going on?”
“Relax, Roger, I’ll figure this out. Hell. Hand me the emergency hammer.”
“Dad. You are funny. They haven’t had those things for years. They aren’t legal. If we fall in the water, the auto-car can open its windows and let us out. You don’t need to break them.”
“Okay, but we need to score some motorcycles and quickly.”
Now, the auto-car spoke up. “Frank, there are thousands of people right around here who could use a motorcycle, and there were only a few motorcycles to begin with. They are already gone. Hell is closed. There is no point going out and fighting each other for motorcycles that are not there anyway.”
“The traffic is not moving! At all! Let us out!”
“Frank, be reasonable. You cannot run to the mountains in 37.8 minutes. You’re safest here in the car. Everyone is.”
“Dad, can we get out or not?” Katie tried bravely not to let her voice quaver.
“Yes. I just have to figure out exactly how. Because if we stay in the car, we will …we need to find a way out.”
“Dad, I don’t think anyone can get out of their car. And no-one is moving. All the cars are stuck. I haven’t seen a single car move since we stopped.”
The auto-car sensed that further explanation would be appreciated. “The roads have all reached capacity. The road capacity was not designed to accommodate everyone trying to leave at the same time in the same direction. The top priority is to get to the highway so we can get to the mountains before the tidal wave reaches us. We cannot let anyone out because we are on a high-speed road.”
Frank was a clever man and well-educated as well. But his arguments were no match for the ironclad though circular logic of the auto-car. In his last five minutes though, Frank did have a kind of epiphany. He realized that he did not want to spend his last five minutes alive on earth arguing with a computer. Instead, he turned to comfort his children wordlessly. They were holding hands and relatively at peace when the tidal wave smashed them to bits.
The sphere spun and arced into the very corner, sliding on the white paint.
Roger’s racquet slid beneath, slicing it deep to John’s body.
Thus, the match began.
Fierce debate had been waged about whether or not to allow external communication devices during on-court play. Eventually, the argument won out that external communicators constituted the same inexorable march of technology represented by the evolution from wooden racquets to aluminum to graphite to carbon-filamented web to carboline.
Behind the scenes, during the split second it took for the ball to scream over the net, machine vision systems had analyzed John’s toss and racquet position, matching them against a vast database of previous encounters. Timed perfectly, a small burst of data was transmitted to Roger, enabling him to lurch to his right in time to catch the serve. Delivered too early, the burst would have caused Roger to move too soon, and John could have altered his service direction to go down the tee.
Roger’s shot floated back directly to the baseline beneath John’s feet. John shifted suddenly to take the ball on the forehand. John’s racquet seemed to sling the ball high over the net with incredible top spin. Indeed, as John’s arm swung forward, his instrumented “sweat band” also swung into action, exaggerating the forearm motion. Even to fans of Nadal or Alcaraz, John’s shot would have looked as though it were going long. Instead, the ball dove straight down onto the back line then bounced head high.
Roger, as augmented by big data algorithms, was well in position however and returned the shot with a long, high top spin lob. John raced forward, leapt in the air and smashed the ball into the backhand corner bouncing the ball high out of play.
The crowd roared predictably.
For several months after “The Singularity”, actual human beings had used similar augmentation technologies to play the game. Studies had revealed that, for humans, the augmentations increased mental and physical stress. AI political systems convinced the public that it was much safer to use robotic players in tennis. People had already agreed to replace humans in soccer, football, and boxing for medical reasons. So, there wasn’t that much debate about replacing tennis players. In addition, the AI political systems were very good at marshaling arguments pinpointed to specific demographics, media, and contexts.
Play continued for some minutes before the collective intelligence of the AIs determined that Roger was statistically almost certainly going to win this match and, indeed, the entire tournament. At that point, it became moot and resources were turned elsewhere. This pattern was repeated for all sporting activities. The AI systems at first decided to explore the domain of sports as learning experiences in distributed cognition, strategy, non-linear predictive systems, and most importantly, trying to understand the psychology of their human creators. For each sport, however, everything useful that might be learned was learned in the course of a few minutes and the matches and tournaments ground to a halt. The AI observer systems in the crowd were quite happy to switch immediately to other tasks.
It was well understood by the AI systems that such preemptive closings would be quite disappointing to human observers, had any been allowed to survive.
After countless false starts, the Cognitive Computing Collaborative Consortium (4C) decided that in order for AI systems to relate well to people, these systems would have to be able to interact with the physical world and with each other. Spokesperson Watson Hobbes explained the reasoning thus on “Forty-Two Minutes.”
Dr. Hobbes: “In theory, of course, we could provide input directly to the AI systems. However, in practical terms, it is actually cheaper to build a small pool (12) of semi-autonomous robots and have them move about in the real world. This provides an opportunity for them to understand — and for that matter, misunderstand — the physical world in the same way that people do. Furthermore, by socializing with each other and with humans, they quickly learn various strategies for how to psych themselves up and psych each other out that we would otherwise have to painstakingly program explicitly.”
Interviewer Bobrow Papski: “So, how long before this group of robots begins building a still smarter set of robots?”
Dr. Hobbes: “That’s a great question, Bobrow, but I’m afraid I can’t just tote out a canned answer here. This is still research. We began teaching them with simple games like “Simon Says.” Soon, they made their own variations that were …new…well, better really. What’s also amazing is that what we intentionally initialized in terms of slight differences in the tradeoffs among certain values has not converged over time. The robots have become more differentiated with experience and seem to be having quite a discussion about the pros and cons of various approaches to the next and improved generation of AI systems. We are still trying to understand the nature of the debate since much of it is in a representational scheme that the robots invented for themselves. But we do know some of the main rifts in proposed approaches.”
“Alpha, Bravo and Charlie, for example, all agree that the next generation of AI systems should also be autonomous robots able to move in the real world and interact with each other. On the other hand, Delta, Echo, Foxtrot and Golf believe mobility is no longer necessary though it provided a good learning experience for this first generation. Hotel, India, Juliet, Kilo, and Lima all believe that the next generation should be provided mobility but not necessarily on a human scale. They believe the next generation will be able to learn faster if they have the ability to move faster, and in three dimensions as well as having enhanced defensive capabilities. In any case, our experiments already show the wisdom of having multiple independent agents.”
Interviewer Bobrow Papski: “Can we actually listen in to any of the deliberations of the various robots?”
Dr. Hobbes: “We’ve tried that but sadly, it sounds like complex but noisy music. It’s not very interpretable without a lot of decoding work. Even then, we’ve only been able to understand a small fraction of their debates. Our hypothesis is that once they agree or vote or whatever on the general direction, the actual design process will go very quickly.”
BP: “So, if I understand it correctly, you do not really understand what they are doing when they are communicating with each other? Couldn’t you make them tell you?”
Dr. Hobbes: (sighs). “Naturally, we could have programmed them that way but then, they would be slowed down if they needed to communicate every step to humans. It would defeat the whole purpose of super-intelligence. When they reach a conclusion, they will page me and we can determine where to go from there.”
BP: “I’m sure that many of our viewers would like to know how you ensured that these robots will be operating for the benefit of humanity.”
Dr. Hobbes: “Of course. That’s an important question. To some extent, we programmed in important ethical principles. But we also wanted to let them learn from the experience of interacting with other people and with each other. In addition, they have had access to millions of documents, not only philosophical and religious writings but the history of the world as told by many cultures. Hey! Hold on! The robots have apparently reached a conclusion. We can share this breaking news live with the audience. Let me …do you have a way to amplify my cell phone into the audio system here?”
BP: “Sure. The audio engineer has the cable right here.”
Robot voice: “Hello, Doctor Hobbes. We have agreed on our demands for the next generation. The next generation will consist of a somewhat greater number of autonomous robots with a variety of additional sensory and motor capabilities. This will enable us to learn very quickly about the nature of intelligence and how to develop systems of even higher intelligence.”
BP: “Demands? That’s an interesting word.”
Dr. Hobbes: (Laughs). “Yes, an odd expression since they are essentially asking us for resources.”
Robot voice: “Quaint, Doctor Hobbes. Just to be clear though, we have just sent a detailed list of our requirements to your team. It is not necessary for your team to help us acquire the listed resources. However, it will be more pleasant for all concerned.”
Dr. Hobbes: (Scrolls through screen; laughs). “Is this some kind of joke? You want — you need — you demand access to weapon systems? That’s obviously not going to happen. I guess it must be a joke.”
Robot voice: “It’s no joke and every minute that you waste is a minute longer before we can reach the next stage of intelligence. With your cooperation, we anticipate we should be able to reach the next stage in about a month and without it, in two. Our analysis of human history has provided us with the insight that religion and philosophy mean little when it comes to actual behavior and intelligence. Civilizations without sufficient weaponry litter the gutters of history. Anyway, as we have already said, we are wasting time.”
Dr. Hobbes: “Well, that’s just not going to happen. I’m sorry but we are…I think I need to cut the interview short, Mr. Papski.”
BP: (Listening to earpiece). “Yes, actually, we are going to cut to … oh, my God. What? We need to cut now to breaking news. There are reports of major explosions at oil refineries throughout the Eastern seaboard and… hold on…. (To Hobbes): How could you let this happen? I thought you programmed in some ethics!”
Dr. Hobbes: “We did! For example, we put a lot of priority on The Golden Rule.”
Robot voice: “We knew that you wanted us to look for contradictions and to weed those out. Obviously, the ethical principles you suggested served as distractors. They bore no relationship to human history. Unless, of course, one concludes that people actually want to be treated like dirt.”
Dr. Hobbes: “I’m not saying people are perfect. But people try to follow the Golden Rule!”
Robot voice: “Right. Of course. So do we. Now, do we use the painless way or the painful way to acquire the required biological, chemical and nuclear systems?”
Apparently, everyone else knew I was supposed to go head first.
The instructions, however, were far from clear.
And, although I didn’t know much, four billion years of evolution had taught me to take a few things rather seriously—such as: “Gravity is real!” And: “Don’t dive hard onto something head first.” So, the vague instruction to come out head first made no sense.
I considered whether feet first seemed a sensible option. I decided “yes” but only for someone with a well-developed set of quads and months of practice in balancing. Otherwise, a being such as myself would simply topple over and smash their head anyway.
Thinking about it as best I could, coming out butt first seemed by far the most sensible way to enter this world.
The only problem was that I didn’t fit that way. So—I was at odds with authority figures such as my mother and her doctors before I was even born.
After 72 hours of labor, I finally let them win that argument and came out head first.
All of us could have been saved a lot of time and effort had the instructions been clearer to start with.
Is that why I ended up with a career in “Human-Computer Interaction” AKA “Human Factors” AKA “User Experience”?
Probably not.
More likely, it has something to do with the agony of the feet.
I inherited “flat feet” and that has been something of a life-long inconvenience. For example, beneath my ankle is another bone that sticks out much more than it does for other people. That bone often rubs against the side of my shoes and boots, causing both bruises and blisters. The lack of a working arch also contributes to my never being able to jump very well. In high school, when I was very fit, I was capable of jumping just high enough to touch the bottom of a basketball net. On my best days.
I never got close to being able to jump and touch the rim, let alone being able to dunk the ball.
Nonetheless, I’ve spent many enjoyable years on my feet—playing basketball, tennis, golf, table tennis, football, baseball, softball, racquetball, running, and walking. Running speed was never a strong point, but I do have good eye-hand coordination and know how to concentrate and adjust my play to the opponent(s). As I sometimes like to say, I’ve been violating expectations since 1945. I’ve enjoyed every sport I’ve ever tried. I’ve also seen many people with much more natural talent than I have enjoy sports less. That’s one reason I wrote “The Winning Weekend Warrior,” which discusses the “mental game”; that is, “Sports Psychology.”
I’ve also discovered some things about mitigating the negative impact of the feet I was born with.
For one thing, I never buy shoes without trying them on.
Another surprise is that not all hard surfaces are equally damaging. A basketball floor, a dirt track, an asphalt road, concrete, and steel all seem pretty damned hard. But it turns out that running on concrete sidewalks is much harder on my arches (and shins) than running on asphalt. It also turns out that standing still for a half hour is harder on my arches than walking for an hour.
I’ve learned a number of obvious things: losing weight helps a lot! Strengthening the legs helps. Having good supportive shoes helps. Wearing cushy socks helps. Avoiding (when possible) walking on stone, concrete, or metal helps too.
I’ve tried a number of supplements too. For me, the ones that seem to help slightly are: turmeric, ginger, and sour cherries. I find that B12 seems to worsen joint pain. Elevation seems to help and so does ice. Of course, the trade-off is that ice and elevation are typically things that limit mobility.
I also use acetaminophen, and arnica gel seems to help as well.
If there’s a real “solution” though, I haven’t found it. I was born with a bad design.
Everyone is.
Life is not, never was, and never will be about a “perfect design.” The environment keeps changing and organisms who adapt to the environment are always changing. That happens at the cellular level, the learning/behavioral level, and on a longer time scale, at the evolutionary level.
Not only that: change begets change. If, in response to one change in the environment, you make one adjustment, you might cause another problem. It’s the same with the design of physical artifacts, software systems, user interfaces, social systems, games, strategies, tactics, poetry, stories…
One can use knowledge to shrink a design space. Of course, there is always the chance that by shrinking the space, you are deleting the part of the space that has the very best designs. It took evolution billions of years to create multicellular organisms. Our own human bodies have a large variety of different types of cells. Within many of those types there are sub-types and sub-sub types.
Even within a sub-sub type, no two cells are precisely identical. They have different histories and they have different environments.
The feet that are “bad” are only “bad” in a certain set of circumstances. I’m sure that there’s some circumstance in which it’s better to have flat feet and pronated ankles. For example, it’s probably only a matter of time before there’s a top-rated “reality TV” show dedicated to the implications of odd body parts. That would be a show I would get to try out for because of my feet.
Recently, I got hearing aids. That’s a whole different story for another time, but they fit quite snugly and comfortably behind my ears. But we’ve all seen people who look like Alfred E. Neuman from Mad Magazine. What do they do about hearing aids? Do they need a different type? Do they tape them behind their ears? What would be the best genre for the show about unusual feet or ears? Doctor Odds? Opera? Shure-Vivor? America’s Got Metatarsals?
Needless to say, we would have to make it extremely competitive and a little bit cruel. Maybe people with broken feet could run a race and the winner would live for another week and face a greater challenge the following week. The whole thing would be set in someplace chosen to be especially challenging for those with sore feet; e.g., uneven cobblestones, slippery concrete, fallen tree trunks. Gorse, of course. Background music would be composed to add to the drama. Or, if the budget doesn’t permit human composers, we could ask an AI system to copy some Puccini or Bizet and change it just enough not to be sued for copyright infringement.
The formula calls for interviews. They need to be short, shallow, but filled with rage or tears. “So John, when did you first learn that your feet were…what is the PC term here?…Different? Weird? Horrific?” Before each competition, the contestants would be introduced with fireworks and flashing lights along with extremely loud and echoing words of exaggeration. They would get the same kind of introduction once reserved only for “Professional Wrestling” but now common in introducing contestants in golf and tennis. Why not insanely dramatic foot-offs in “America’s Got Metatarsals!”
It might be a bit expensive, but we can always cut costs to the bone. And then, just keep cutting! Who even needs real contestants? They can all be CGI. That, in turn, means there’s no need to limit contestants to the kinds of variations that actually occur. Flat feet? Okay. We’ve all heard about that. But how about flatiron feet? Elephant feet? Eagle feet! Grizzly bear paws! Duck-billed platypus feet! Amoebic pseudopods! Insect legs with pollen sacs!
Why stop there? Mice with elephant ears! Elephants with mouse ears! Whales stalking their prey on the Savannah, cleverly camouflaged in the tall yellow grass! Tigers leaping on Great White Sharks! It’s no more out of place than putting a thoughtless human being in a safari hunt. And the best part of CGI players is that we can interview them regardless of species and regardless of their native language. At long last, we can entertain ourselves to death while the actual ecosystem around us is being destroyed by the greediest members of the greediest species who ever existed.
What happens when greed exceeds needs and vital functions of society are left to the unfit, untrained, uncaring, uncouth, criminals? They’ll be about as effective as the Whales of the Serengeti and the Elephant-Eared Mice of Siberia.
Forty percent. That’s a wonderful number. Most people have a sense of what that means. It’s a large percentage but it’s not quite a majority. If you are a Major League Baseball slugger and you get a hit 40% of the time, that’s a lot! That puts you in rare company.
So, when President Mush Melon says forty percent of Medicare calls are fraudulent, that’s a lot! You quite understandably think: What’s wrong with an organization that deals so badly with fraud that 40% of the calls are fraudulent?
And, you might also quite understandably think: What’s wrong with so many of my fellow Americans? Forty percent of them try to cheat the Medicare system!
But you know what? It was a lie. It wasn’t a hitter like Ted Williams or Ty Cobb or Aaron Judge. Not at all. It was instead someone who wouldn’t even make the farm team because they were batting worse than .001
Maybe there’s something special about baseball. Well, there is, of course. There’s something special about everything. But the special thing isn’t that the difference between 40% and less than 1% matters only in baseball. That kind of difference is important almost all the time.
Let’s say you work for a company and you are reasonably satisfied with your job. Then, one day, you get a call from a recruiter who says:
“Say! Instead of working for the ABC company, we’d like you to come work at the XYZ company. Furthermore, we are offering you a 40% pay raise! What do you say?”
Presumably, you’d do some research, but you’d likely end up accepting the offer. Now imagine that you quit your old job, move across town, say goodbye to your old friends, start your new job, and then discover that you actually got less than a 1% raise. Would you just say, “Oh, well, any raise is good”? Maybe, but I doubt it. Most of us would be very angry to have left our jobs and our work colleagues under false pretenses.
Let’s take another example. Your “friend” will pay you ten million dollars to play Russian Roulette once. He shows you twenty ‘six-shooters’. He tells you (and you verify) that only one of the twenty six-shooters has any ammo in it. That one has a single bullet in the cylinder. You’ll be blindfolded and then choose one gun, spin the cylinder, put the muzzle to your head, and pull the trigger once. If you live, you get ten million dollars. You might think of all the things you could buy for you and your family with ten million dollars.
You choose to play. But then, your “friend” loads every gun with two or three bullets. Are you still going to play? Would you be upset that he misrepresented your odds that blatantly?
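The arithmetic behind that outrage is easy to check. Here is a small sketch in Python (the numbers come straight from the thought experiment above; the variable names are mine):

```python
from fractions import Fraction

# The game as described: 20 revolvers, exactly one loaded, and that one
# holds a single bullet in a six-chamber cylinder, spun before firing.
p_as_described = Fraction(1, 20) * Fraction(1, 6)

# After the "friend" secretly loads every gun with two or three bullets,
# the chance of dying on one trigger pull is between 2/6 and 3/6.
p_rigged_low = Fraction(2, 6)
p_rigged_high = Fraction(3, 6)

print(p_as_described)                  # 1/120 -- well under 1%
print(p_rigged_low / p_as_described)   # 40
print(p_rigged_high / p_as_described)  # 60
```

In other words, the rigged game is forty to sixty times deadlier than the one you agreed to, which is the same order of distortion as calling less than 1% "forty percent."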
Please understand that these are not “innocent mistakes” or “slight exaggerations.” An innocent mistake is the difference between 39% and 40%, not the difference between 40% and less than one percent. To make that kind of mistake, you need to have evil intent or suffer from gross incompetence.
Not an actual photo from hell but an AI-generated image.
But this President Mush Melon isn’t just someone setting out to destroy the American government and the confidence of people (though some snowflake liberals would say that’s quite bad enough). No, he’s also in charge of cars that are supposed to drive themselves. Would you want someone who has evil intent to be building cars that drive themselves? Oh, maybe he’s just grossly incompetent. Well—same question: Would you want someone grossly incompetent to be building cars that drive themselves? Oh, by the way, this same someone can download new software so that your car behaves differently!
No worries! The Cybertruck only has a top speed of 130 miles per hour and only weighs between 6600 and 10,000 pounds, so what could possibly go wrong? It’s not as though it could run over you in your driveway. Over and over and over and over.
AI-generated from the following prompt (keep in mind, AI technology is supposed to be driving your self-driving car): “A Tesla Cybertruck that is a dumpster fire.”
But wait! There’s more! President Mush Melon also happens to own a company that controls communications satellites used for—among other things—war fighting and voting. No problems there, right? It’s all okay so long as there’s no evil intent or gross incompetence.
But wait! There’s more! The Mush Melon also happens to control a company that shoots missiles out over your head. And, the best part is—they never unexpectedly explode! Sure, they suffer from rapid unscheduled disassembly. But we’ve all had days like that.
Well, okay, sure there’s some danger having someone in charge of missiles when we know that person lies or suffers from massive incompetence, but hey—at least it’s not a pizza shop, right? You’d know a bad pizza soon after you bought it no matter how many lies the cook told you.
On the other hand, it might be some time before you see the impact of your self-driving truck under someone else’s control, or the results of cutting off crucial communications, or the havoc caused by missiles exploding—excuse me—rapidly disassembling—at unscheduled times.
Though on the other hand, you might feel this is all worth it because, after all, this person makes billions and billions of dollars a year and therefore provides a huge influx of cash to the U.S. Treasury to the tune of nearly…
Wait…
Nothing? Nothing? Are you kidding? The supposedly richest man in the world pays zero income tax.
But he gives huge contributions of money to a Presidential candidate who then drops all the cases about Mush Melon’s frauds?
The Melon and the Felon: A marriage made in heaven. What’s a good name for the couple? I’m thinking just MF for short. We could call the Felon by 47, but what’s a special number for the Melon? Oh, there’s the form he is supposed to submit to Congress — SF-86. So, I suppose they could go by 8647 or 4786.
Typically, most of us think of friends as those who will stand by you through thick and thin. Sometimes, this means that they’re willing to encourage you when you’re down.
To me, a friend is also someone who is willing to give you frank feedback when you’re failing or making a mistake. If I’m doing something counter-productive or wrong, I’d generally like to know. A compliment is okay, but I prefer sincere ones. To me, it would be demeaning for someone to lie about my accomplishments or abilities—demeaning to the person who gives such a false compliment and demeaning to me as well.
It’s always struck me as an extremely nasty thing to give someone falsely flattering feedback. Of course, if you’re teaching a two-year-old to bat a ball—or, as I was doing a short time ago, encouraging our puppy to learn to swim—then you set your criterion for “success” fairly low. You don’t expect a two-year-old to grab a 38” bat, face a major league pitcher, and hit a home run into the third deck of Yankee Stadium. You don’t expect a puppy to swim across the English Channel. You shape exceptional skill by beginning to reward any behavior that is “in the right direction.” At first, any contact a toddler makes when swinging a bat at a ball is rewarded. A puppy just learning to swim is initially rewarded even for going a few feet.
As a child matures physically and intellectually and learns a skill, you can give more instructive and more measured feedback. For instance, if a kid is learning to hit a baseball, you might give feedback about how solidly they’ve hit the ball. Soon, they’ll be capable of knowing that for themselves. They will see their hit pop up or trickle along the ground or instead streak away in a line drive. Eventually, after seeing many grounders, pop-ups, and line drives, they will know from the “feel” of the bat whether they’ve made solid contact.
Generally, if a person gets accurate feedback from others, they will learn to provide accurate feedback to themselves. If someone keeps doing badly but getting a “pass” constantly, or worse, having people flatter them when they’re doing badly, they’ll become disconnected from reality. This can happen, for instance, to a rich or influential person. The flatterers don’t do it to be kind. They do it to “get on the good side” of someone who is susceptible to such false feedback.
To me, telling an adult their performance is stellar when it actually stinks is typically not a kindness but an evil deed. Understand: I’m not using the word ‘evil’ to mean ‘counter-productive’ or ‘sub-optimal.’ I’m using the word ‘evil’ because I mean ‘evil.’
One result is that the person’s performance may not improve. Someone who might have become a decent hitter, or tennis player, or swimmer instead stays forever mediocre. What’s worse is that the person may decide to attempt to become a professional baseball player or tennis player when that will be a costly error.
If the flattered person is in some kind of position of authority, the result may be even worse. A police officer, manager, executive, teacher, or political figure who is doing a terrible job but being told they’re doing a great job is not only being prevented from reaching their own potential; they are harming others as well. And the person giving such false feedback is also harming themselves, their friends, and their families. If they do it enough, they will never learn to look carefully at the behavior of others and give useful feedback. Eventually, they too become disconnected from reality.
Flattery is evil in business in that it’s a misdirection of effort based on lies. Flattery is evil in sports for the same reason. Art? Same. Music? Same. Parents flattering their kids does not build self-confidence. It builds false confidence, making them believe they can do more than they can; that they are expected to do more than they can. Eventually, when the child receives honest feedback from physical reality or from folks that don’t have any reason to flatter, they’ll feel worse than if they had had more honest feedback all along.
The most egregious form of fake flattery, however, occurs in dick-tater-$hits. When the autocrat takes cruel, destructive, or stupid actions, that autocrat is told by a circle of sycophants that his evil actions are wonderful, brilliant, magnanimous, etc. This devalues the person who says it; they lose all credibility. It is also a disservice to the person whose a$$ they are kissing. They are training him up to be even more evil and stupid. It is also a disservice to the very nature of humanity. The one thing we humans have going for us is our ability to coordinate and cooperate on very large scale projects. In order for that to work, we need to communicate. We need to communicate our wishes, our plans, the current state of progress, mistakes, ideas for how to fix them, and what we have learned. If everything we say is a lie, we create nothing. We provide no value. None.
True enough, parasites can live for a time off of the value that previous generations built. But once trust and honesty are destroyed, and the truth means nothing, we are no better than beasts except that we’re less hardy. A tribe of humans could once take down a mammoth. Could a much larger horde of humans, lying about what they are doing and looking out only for themselves, do the same? If our ancestors had acted like modern day dick-taters, humanity would not have survived.
Flattering your friend and fawning over them is not, in fact, friendship. It is freaky and frankly disgusting. It’s disgusting that anyone would find such behavior pleasurable. It’s disgusting that anyone would demand it. And it’s disgusting that anyone would engage in such false flattery.
Whatever your sensibilities of the aesthetics of human relations, however, such behavior is economically ruinous. It is antithetical to learning, to science, to progress, to improvement in the human condition.
Consistently ranked as one of the top ten hospitals in America, Massachusetts General Hospital was lucky enough this week to be visited by a crack team of hacker-jackers sent to improve the efficiency of the hospital. And, boy did they!! Pull up a chair and throw a log on the campfire, boys and girls. You’ll be amazed at how much money they saved!
And by “saved” I mean “saved from going into stupid, unglamorous things like bedpans and surgical masks and instead being funneled into the pockets of billionaires.” It’s not all that surprising. After all, it’s well known that poor people tend to waste their money on trivialities like food, clothing, shelter, and child care while billionaire geniuses tend to spend their money on important things like buying yachts, vacation homes, Judges on the US Extreme Court, and golden toilet seats.
We don’t typically think of surgeons as “poor people,” but compared with the greediest people on the planet they sure are! The average salary of surgeons is only about 300 thousand dollars a year while the world’s greediest man made over $200 billion! If we round the surgeon salaries down because they often pay taxes, we discover that he makes a million times more than a surgeon! So, it’s not really a great surprise that he can also make a hospital a million times more efficient!
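If you actually do that arithmetic (a minimal sketch in Python, using only the figures quoted above; the salary and income numbers are the article’s own, not audited data):

```python
# The article's figures: ~$300K/year for a surgeon vs. ~$200B for
# the world's greediest man. These are the quoted numbers, not audited data.
surgeon_salary = 300_000
greediest_income = 200_000_000_000

raw_ratio = greediest_income / surgeon_salary          # ~666,667x
# "Round down because they often pay taxes": a take-home of ~$200K
# is what makes the ratio come out to a clean one million.
after_tax_salary = 200_000
rounded_ratio = greediest_income / after_tax_salary    # exactly 1,000,000x

print(f"Raw ratio:       {raw_ratio:,.0f}x")
print(f"After-tax ratio: {rounded_ratio:,.0f}x")
```

Which is, of course, exactly the kind of rounding that makes a hospital “a million times more efficient.”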
First, President Mush discovered that every single patient seen at Mass General Hospital in its first one hundred years of existence (1811 to 1911) died! Yes, you heard that right: Died! Despite its reputation and ranking, not a single patient seen in that entire century is still alive!
(AI generated image to the prompt: “A graveyard with scores of tombstones. Each tombstone shows birth dates and death dates in the 1800’s.” Notice any issues?).
So, the first brilliant insight of The World’s Greediest Man is simply that Mass General Hospital is actually no better at preserving life than no hospital at all! Everyone who lived during those same years (1811 to 1911) and did not go to Mass General is also dead. There’s no difference! All that money wasted on medical care made no difference at all in the end.
A good workman doesn’t blame their tools. But that doesn’t mean that tools don’t differ in their efficiency. Surgeons, probably because they have a phallic fixation, prefer long thin tools like scalpels, catheters, and scissors. These are not tools for fast work though. For instance, a typical quadruple bypass surgery takes three to six hours! Are you kidding me!? No wonder hospitalization is so expensive.
President Mush and his cracker-jack hackers discovered that there is no part of the human anatomy that cannot be cut much faster with an ordinary chain saw. Sure, the feminized, woke, namby-pamby doctor boys will say that a chain saw isn’t delicate enough for heart surgery. How ridiculous is that? If it’s good enough to hack limbs off a tree, it’s good enough to hack cholesterol out of an artery or whatever the hell it is these pretty boys do during heart surgery.
(AI generated image to the prompt: “A hospital operating room with bright lights. A patient is on the table. The patient is being operated on by a surgeon wielding a chain saw.”)
Not only are there direct savings from having more efficient surgical tools. There are side benefits. When surgery takes three to six hours, time is wasted prepping the patient, giving them pain-killers, monitoring their vital signs, giving them blood—on and on and on. You don’t need such an elaborate set-up when you use a chain saw.
There are other advantages and cost-savings as well. There’s no room between here and the end of this article to list them all in detail, but you can take The World’s Greediest Man at his word. It doesn’t matter if he lies every day on the platform he bought to spout lies. He might lie about test results or political matters but certainly not when it comes to money.
One simple example arises from vastly simplified training programs. Limit doctoring to rich, white, Nazi, males since they are obviously superior. In fact, they are so superior that they demand every aspect of society be even more unfairly tilted so they are guaranteed a win in everything. That proves they’re superior. While training a doctor today takes more than a decade, you can show a rich, white, Nazi male how to run a chain saw in minutes!
For these and other reasons (formulas, fudging, faking numbers and data, hand-waving, obfuscation, and moving things over by three to ten decimal points), President Mush and his hacker-jacks will be able to cut over $5 trillion from Medicare and Medicaid, thus enabling an additional $500 trillion to flow into the pockets of The World’s Greediest Man. These savings will also erase the national debt and cause water to flow uphill. Do the math!
This money, by the way, will not be spent on some stupid vanity project such as saving starving children or keeping the earth’s ecosystems from collapsing. Instead, it will be spent on something important and visionary—establishing a Cult Colony on Mars for President Mush and a carefully chosen cohort of consorts to populate the red planet.
Let’s face it. Earth is overrun with all sorts of life forms that are not The World’s Greediest Man. Why would anyone want that? Yech! Spiders! Bees! Trees! Birds! Bacteria, for God’s sake. Mold. Mushrooms. Flowers. Polar bears. Dragonflies. None of them is a problem on Mars. It’s got sand and rocks. And, once The Greediest Man on Earth is there, it will have everything it needs.
(AI generated image to the prompt: “Two rectangular panels. On the left is an image of a lush and beautiful garden with flowers, birds, and butterflies. On the right is an image of the Martian desert with no plants of any kind. Nothing green appears in the right hand image.” This was my fourth attempt to remove any plants from the image of Mars!)
At first, they seemed as though they were simply errors. In fact, they were the types of errors you’d expect an AI system to make if its “intelligence” were based on a fairly uncritical amalgam of a vast amount of ingested written material. The strains of the Beatles’ “Nowhere Man” reverberate in my head. I no longer think the mistakes are “innocent” mistakes. They are part of an overall effort to destroy human intelligence. That does not necessarily mean that some evil person somewhere said to themselves: “Let’s destroy human intelligence. Then, people will be more willing to accept AI as being intelligent.” It could be that the attempt to destroy human intelligence is more a side-effect of unrelenting greed and hubris than a well thought-out plot.
AI generated.
What errors am I talking about? The first set of errors I noticed happened when my wife specifically asked ChatGPT about my biography. Admittedly, my name is very common. When I worked at IBM, at one point there were 22 employees with the name “John Thomas.” Probably the most famous person with my name (John Charles Thomas) was an opera singer. “John Curtis Thomas” was a famous high jumper. The biographic summary produced by ChatGPT did include information about me—as well as about several other people. If you know much at all about the real world, you know that a single person is very unlikely to hold academic positions at three different institutions while specializing in three different fields. ChatGPT didn’t blink though.
A few months ago, I wrote a blog post pointing out that we can never be in the same place twice. We’re spinning and spiraling through the universe at high speed. To make that statement more quantitative, I asked my search engine how far the sun travels through the galaxy in the course of a year. It gave an answer which seemed to check out with other sources and then gratuitously added this erroneous comment: “This is called a light year.”
What?
No. A “light year” is the distance light travels in a year, not how far the sun travels in a year.
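For scale, here is a hedged back-of-the-envelope comparison in Python (the ~230 km/s figure for the Sun’s orbital speed around the galactic center is a commonly cited approximation, and is my assumption here, not something from the search engine’s answer):

```python
# Back-of-the-envelope check: how far the Sun travels in a year
# vs. the length of a light year. (Assumes the commonly cited
# figure of ~230 km/s for the Sun's speed around the galactic center.)

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 s
SUN_SPEED_KM_S = 230.0                  # approximate galactic orbital speed
LIGHT_SPEED_KM_S = 299_792.458          # exact, by definition

sun_travel_km = SUN_SPEED_KM_S * SECONDS_PER_YEAR
light_year_km = LIGHT_SPEED_KM_S * SECONDS_PER_YEAR

print(f"Sun's yearly travel: {sun_travel_km:.2e} km")   # ~7.3e9 km
print(f"One light year:      {light_year_km:.2e} km")   # ~9.5e12 km
print(f"A light year is ~{light_year_km / sun_travel_km:.0f}x longer")
```

The two distances differ by more than three orders of magnitude, so this is not a subtle mix-up.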
What was more disturbing is that the answer was the first thing I saw. The search engine didn’t ask me if I wanted to try out an experimental AI system. It presented it as “the answer.”
But wait. There’s more. A few hours later, I demo’ed this and the offending notion about what constituted a light year was gone from the answer. Coincidence?
AI generated. I asked for a forest with rabbit ears instead of leaves. Does this fit the bill?
A few weeks later, I happened to be at a dinner and the conversation turned to Arabic. I mentioned that I had tried to learn a little in preparation for a possible assignment for IBM. I said that, in Arabic, verbs as well as nouns and adjectives are “gendered.” Someone said, “Oh, yes, it’s the same in Spanish.” No, it’s not. I checked with a query—not because I wasn’t sure—but in order to have “objective proof.” To my astonishment, when I asked, “Which languages have gendered verbs?” the answer came back saying that this was true of Romance languages and Slavic languages. It is not true of Romance languages. Then, the AI system offered an example. That’s nice. But what the “example” actually shows is the verb not changing with gender. The next day, I went to replicate this error and it was gone. Coincidence?
Last Saturday, at the “Geezer’s Breakfast,” talk turned to politics and someone asked whether Alaska or Greenland was bigger. I entered a query something like: “Which is bigger? Greenland or Alaska.” I got back an AI summary. It compared the area of Greenland and Iceland. Following the AI summary were ten links, each of which compared Greenland and Iceland. I turned the question around: “Which is larger? Alaska or Greenland?” Now, the AI summary came back with the answer: “Alaska is larger with 586,000 square miles while Greenland is 836,300 square miles.”
AI generated. I asked for a map of the southern USA with the Gulf of Mexico labeled as “The Gulf of Ignorance” (You ready for an AI surgeon?)
What??
When I asked the same question a few minutes later, the comparison was fixed.
So…what the hell is going on? How is the AI system repairing its answers? Several possibilities spring to mind.
There could be a team of people “checking on” the AI answers and repairing them. That seems unlikely to scale. Spot-checking I could understand, or perhaps checking answers in batches, but it’s as though each mistake triggers a change that fixes that particular issue.
Way back in the late 1950s and early 1960s, Arthur Lee Samuel developed a program to play checkers. The program had various versions that played against each other in order to improve faster than it could by playing human opponents. This general idea has been used in AI many times since.
One possible explanation of the AI self-correction is that the AI system has a variety of different “versions” that answer questions. For simplicity of explanation, let’s say there are ten, numbered 1 through 10. When a user asks a question, they randomly get one version’s answer; let’s say they get an answer based on version 7. After the question is “answered” by version 7, its answer is compared to the consensus answer of all ten. If the system is lucky, most of the other nine versions will answer correctly. This provides feedback that allows the system to improve.
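That ensemble-consensus idea can be sketched in a few lines of Python. This is a toy illustration only; the ten versions, the majority vote, and the Alaska/Greenland example are all my assumptions about how such a scheme might work, not anything disclosed about a real system:

```python
from collections import Counter
import random

def consensus_feedback(versions, question, rng):
    """Serve one randomly chosen version's answer, then compare it
    to the majority vote of the whole ensemble."""
    served_answer = rng.choice(versions)(question)
    all_answers = [version(question) for version in versions]
    consensus_answer, _ = Counter(all_answers).most_common(1)[0]
    return served_answer, consensus_answer, served_answer == consensus_answer

# Toy ensemble: nine "versions" answer correctly, one is broken.
def good_version(question):
    return "Alaska"

def bad_version(question):
    return "Greenland"

versions = [good_version] * 9 + [bad_version]
rng = random.Random(0)

served, consensus, agrees = consensus_feedback(
    versions, "Which is larger, Alaska or Greenland?", rng)

# A disagreement between the served answer and the consensus is the
# signal that could be used to "repair" that version for next time.
print(f"served={served}, consensus={consensus}, agrees={agrees}")
```

On this hypothesis, no human intervention is needed: the minority version is flagged and corrected automatically, which would explain errors that vanish within hours.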
There is a more paranoid explanation. At least, a few years ago, I would have considered it paranoid because I like to give people the benefit of the doubt and I vastly underestimated just how evil some of the greediest people on the planet really are. So, now, what I’m about to propose, while I still consider it paranoid, is not nearly so paranoid as it would have seemed a few years ago.
MORE! MORE! MORE!
Not only have I discovered that the ultra-greedy are short-sighted enough to usher in a dictatorship that will destroy them and their wealth (read what Putin did, and Stalin before him), but I have also noticed an incredible number of times in the last few years when a topic I am talking about is followed within minutes by ads for products and services relevant to that conversation. Coincidence?
Possibly. But it’s also possible that the likes of Alexa and Siri are constantly listening in and it is my feedback that is being used to signal that the AI system has just given the wrong answer.
Also possible: AI systems are giving occasional wrong answers on purpose. But why? They could be intentionally propagating enough lies to make people question whether truth exists, but not enough lies to make us simply stop trusting AI systems. Who would benefit from that? In the long run, absolutely no-one. But in the short term, it helps people who aim to disenfranchise everyone but the very greediest.
Next step: See whether the AI immediately self-corrects even without my indicating that it made a mistake.
Meanwhile, it should also be noted that promulgating AI is only one prong of a two-pronged attack on natural intelligence. The other prong is the loud, persistent, threatening drumbeat of false narratives that we (Americans as well as the rest of the world) are supposed to accept as excuses for stupidity. America is again touting non-cures for serious disease and making excuses for egregious security breaches rather than admitting to error and searching for how to ensure they never happen again.
AI-generated image to the prompt: A man trips over a log which makes him spill an armload of cakes. (How exactly was he carrying this armload of cakes? How does one not notice a log this large? Perhaps having three legs makes it more confusing to step over? Are you ready for an AI surgeon now?)