Most of the suburban neighborhoods of my home town were quite similar. They typically had a long row of very similar houses, and each house had a yard filled with neatly mowed grass. Around the house were bushes. The size of the houses varied from neighborhood to neighborhood, but let’s go back to the part where all the houses “decided” to landscape their yards in a very similar way. No one, so far as I know, ever threatened a person whose lawn was not well-mowed. For the most part, people voluntarily kept their yards looking “nice,” although within very narrow bands of taste. Having the neighborhood embody variations on a theme actually made the neighborhood as a whole look nice. Neighborhoods would have had a quite different look and feel if everyone competed on how high they could grow the bushes and trees throughout their entire property!
And, at a larger level, that’s how I feel about the rampant sensationalism in advertising. I once visited a small college town for a job interview, and the only signs were moderately sized wooden signs that said, “Post Office” or “USABank” or “Barber Shop” — and it all worked for everyone in that small locale. But when there’s enough money to “go national,” businesses get into an arms race to grab your attention with bold type and much worse tricks. It all just becomes harder to read. It makes buying and selling actually more random, as people need to filter out all the crap they are exposed to nearly every waking moment. So, I actually think these huge ad budgets are exactly a kind of tragedy of the commons. All the companies would be better off with lower-key, quieter ads, and so would we. And consumers could make more intelligent choices, because they would be exposed to little enough information to make some sense of it.
Is there a way for advertisers, regulators, and the public to unwind toward less sensationalistic advertising? I don’t know whether it’s possible or even desirable. But it’s worth considering.
It’s also worth considering experimenting with a different set of algorithms for social media. In their current instantiation, social media are a bad alternative to face-to-face meetings. In real physical space, when people meet face to face, they generally act civilly, for a number of reasons. This doesn’t always happen, obviously, but it generally does, even when people disagree. Actual fights at school board meetings, in Congress, in state legislatures, at town hall meetings, high school debates, bowling leagues, assemblies, and work meetings occur rarely.
Filtering and Bandwagon Effects
The social media that I know of have algorithms to filter what is shown to you. These algorithms work behind the scenes showing you and me just those things that are meant to maximize profits for the social media company. Yes, true enough, there is an intermediate goal of pleasing the user. But rest assured, if there were a way to displease the user and make more money, that’s what would be done. It’s important to keep in mind the intermediate as opposed to the ultimate goal. Of course, you realize that the social media company is out to make money. And, you also know from your own experience, that the social media company suggests things to you and shows you ads. In some cases, you also directly pay the social media company, perhaps for enhanced capabilities.
Despite not knowing the details of these algorithms, I can make some educated guesses. For instance, on Facebook, we are presented with a scroll of posts. Generally, these originate from people you are “friends” with on Facebook. Ads and sponsored pages (that is, ads that don’t look like ads) weasel their way in there as well. But I have hundreds of friends on Facebook. Which ones actually appear in the feed? Likely inputs to that decision are how many times, and for how long, I hesitated on previous posts from that person. Most likely, that weighting function is moderated by a recency and frequency metric. In addition, choosing an emoticon would give that person a bigger bump, and a still larger bump would come from commenting on a post. The biggest bump of all would come from a “Share.” It’s possible, though it seems to me unlikely, that Facebook might actually do some natural language processing on comment contents to see whether the reaction text is positive or negative about the post. I think it likely that Facebook may also assign some indirect positive weight: if many of my friends, especially those highly “valued” according to the FB algorithm, like a post, then I am more likely to see it as well.
Let’s assume for the purposes of argument that the above speculations are more or less accurate. Clearly, there are unintended consequences if these are the only measures the algorithm considers. For example, say I am friends with Claude and Carol. I play tennis frequently with Claude and met him about a year ago. Carol, on the other hand, I’ve known for fifty years, and I find everything she reads, thinks, etc. fascinating. As it turns out, Claude posts about 30 times a day, and a lot of the stuff is rather cute. So if I see it, I may click a “Like.” Carol, on the other hand, posts maybe once every week or two. Whatever it is, it is interesting, and I often comment on or share it. Because Carol posts so infrequently, I don’t even notice that I haven’t seen her on FB for the last three weeks. Meanwhile, at long last, Carol posts: “Hey guys. Recovering from accident. More later.” But do I see it? I haven’t paused on, liked, commented on, or shared any of Carol’s posts because she hasn’t had any. It’s quite possible that Carol’s post will never get to the top of my queue. If I then fail to see this post, Carol’s “rank” will go down even further. Having a post rise high in your queue probably also depends to some extent on its content and accompanying media. I like videos in general, and perhaps I like posts about “Human Computer Interaction.” That happens to be what Carol typically posts about. Her most recent post, however, has nothing topic-wise to recommend it to me. The keywords that might be extracted (“guys,” “accident,” “recovering”) are not generally topics that interest me. So, because of the unusual and “uninteresting” post, I’m even less likely to see the post from my good friend, Carol.
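The Claude-and-Carol scenario can be made concrete with a toy scoring function. To be clear, everything here (the signal names, the weights, the recency half-life) is an assumption invented for illustration, not Facebook’s actual algorithm:

```python
# Toy "friend affinity" scorer based on the speculated signals in the text.
# All weights and the recency half-life are invented assumptions.

WEIGHTS = {"pause_seconds": 0.1, "reactions": 1.0, "comments": 2.0, "shares": 4.0}
HALF_LIFE_DAYS = 7.0  # assumed: a signal loses half its weight every week

def affinity(history):
    """Score my interest in a friend from past engagement with their posts.

    `history` is one dict per past post, recording how long I paused on it
    and how many reactions, comments, and shares I gave it, plus its age.
    """
    score = 0.0
    for post in history:
        raw = sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)
        decay = 0.5 ** (post["age_days"] / HALF_LIFE_DAYS)  # older signals fade
        score += raw * decay
    return score

# Claude: dozens of recent cute posts, each drawing a brief pause and a like.
claude_history = [{"pause_seconds": 3, "reactions": 1, "age_days": d % 7}
                  for d in range(60)]
# Carol: rare but deeply engaging posts, and nothing at all for three weeks.
carol_history = [{"pause_seconds": 30, "reactions": 1, "comments": 1,
                  "shares": 1, "age_days": 21 + 14 * i} for i in range(3)]

print(affinity(claude_history) > affinity(carol_history))  # prints True
```

Under these made-up numbers, Claude’s sheer volume and recency swamp Carol’s far deeper per-post engagement, so her “Recovering from accident” post would sink in the queue exactly as the scenario describes.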
In any case, I can pretty much guarantee that whether these algorithms are good or bad for society as a whole has not been a top priority in design meetings. Perhaps there is a way for the user to be able to push and pull the priorities in various ways to achieve a panoply of different results. In fact, one can imagine an open system environment in which dispersed and diverse groups offer up various add-on capabilities. This is an alternative to having one giant company control how we see and react to each other.
The “Bandwagon Effect” refers to social media algorithms giving high priority to items that already have more pauses, likes, comments, and shares (in the case of Facebook; in Twitter terms, likes and retweets). Thought of in terms of viewing humanity as a giant neural net, the bandwagon effect is a sharpening to the first stimulus that pops up. This is less than the intelligence of an earthworm! We should be able to arrange a multi-layer, highly interconnected network of people to have a more intelligent and nuanced reaction than “WOW!” And yet, every time one of these idiotic tsunamis of insanity goes viral, it interferes in a very real sense with your ability to keep up with the people whom you actually know.
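A tiny simulation, with all numbers invented for illustration, shows how this sharpening to the first stimulus locks in: a feed that ranks purely by existing engagement lets an early, arbitrary lead run away, regardless of quality.

```python
import random

random.seed(0)  # fixed seed so this toy run is reproducible

# Two equally good posts; "A" just happens to collect the first few likes.
engagement = {"A": 5, "B": 1}

# Each of 1,000 viewers is shown whichever post already has more engagement
# (the bandwagon ranking) and engages with whatever is shown 30% of the time.
for _ in range(1000):
    shown = max(engagement, key=engagement.get)
    if random.random() < 0.3:
        engagement[shown] += 1

# "A" snowballs; "B" is never shown again and stays frozen at 1.
print(engagement)
```

The arbitrary early lead, not any difference in quality, decides everything: “B” never surfaces again once “A” is ahead, which is the rich-get-richer dynamic behind items “going viral.”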
Companies need to carefully consider ways to ensure people’s identities are broadly consistent with reality. I do not think it would be okay for me to have an account on Twitter, for instance, with a name like “Donald_Trump” or “Barack_Obama” if I have no official relationship to the real people most likely referenced by these labels. This is even more serious if I am really using such a moniker to get people to see my posts when my real goal is to trash these political figures.
My FB profile says I worked at IBM Research and Verizon and studied at the University of Michigan. Does FB do any work to verify these claims? After all, if I make a comment about IBM, people may reasonably put a little more credence on that comment if they know I worked at IBM Research than if I just made that up out of whole cloth. As we have recently discovered, some “fake” accounts that claimed to be US citizens concerned about our country were actually the accounts of Russians who were intentionally trying to foment discontent in America. Things of a similar nature are being used to disrupt and divide other Western democracies.
Similarly, my LinkedIn profile is even more detailed, with degrees, work experiences, and other details. But suppose I present myself (falsely) as a highly experienced diplomat with widespread Middle East experience. Won’t people who read my various posts and comments about the Middle East put more weight on my opinion if I claim to know something about it? The question is, however, does LinkedIn do anything to verify the claims a person makes about their experience and background?
I am not picking on these specific social media platforms. They are among the most popular and are three I happen to be active on. That’s the only reason I chose them. But do any of them make attempts to verify the information? Sure, you could argue it’s up to the individual to do this kind of checking, but that’s insane. My name, John Thomas, for example, is extremely common. It’s not trivial, even with Google, to distinguish my actual publications, background, etc. from those of others with the same name, even for me. Wouldn’t it be a lot more efficient for, say, LinkedIn to at least lightly verify that I worked at IBM than for every one of my 3000 connections on LinkedIn to do it themselves? Part of the value of the social media platforms lies in the profiles that people create. Is it too much to ask for the social media companies to do any checking? Don’t we expect the FDA to at least spot-check that things labelled as “beef” actually contain healthy cow meat and not rotted horse meat? We don’t allow people to get away with fake credit cards or driver’s licenses, and with good reason. Who makes sure these social media profiles contain reasonably accurate information? Who should? It would be one thing if these media were simply used as occasional sources of entertainment. But that’s not the case! People rely on FB, for instance, for their news!
In the absence of any checking, most people, me included, are putting up “real” information about ourselves, but others are completely lying perhaps as part of a small personal scam, but more crucially as part of an international attempt to divide America and other western democracies. True enough, FB terms of service ask for the help of users to put up real information about themselves. But we have learned that some accounts were not even telling the truth about their country of origin. This is not okay, folks. This is not okay.
Could or should social media do more to enforce some kind of civility in the content? This may admittedly be difficult to implement. Currently, social media do have various “Terms of Service” meant to move people toward civility but real civility is much more than simply avoiding swear words. It is easy to avoid being blocked and still “say” the swear word in a number of ways such as embedding or substituting other characters. You know I mean a**hole and I know I mean it. No one thinks it is short for a parameter “a” raised to the power “hole.” But even if smarter algorithms detected and deleted disguised swear words, it would only address a small part of the problem.
As I have blogged on many occasions, another part of the problem is likely due to society’s rush, and that, in turn, is reflected in limits such as (until recently) Twitter’s limit of 140 characters. I personally like the restriction, since it provides a creative opportunity. However, even in my most creative mood, I find it very difficult, in 140 (or even 280) characters, to acknowledge your point, restate it, and then advance some kind of reasoned dialogue about an issue we disagree on.
Research and suggestions about how to make on-line environments more constructive have been published for a while. For example, lack of anonymity and human moderation appear to be critical. One can also create better communities, perhaps by using levels of intimacy and trust. In the physical architecture of a home, for example, Christopher Alexander points out that most homes have a gradient from public to private space. The front porch, for instance, is somewhat public. Your vestibule or entry is somewhat private, but you may let in the pizza delivery man. People would have to be further vetted to be allowed into your living room. Traditionally, the bedroom and inner garden would be still more private and reserved for fewer people.
In some cases, people may type something that is unintentionally uncivil. When you speak face to face, you can see the reactions of the other person immediately. This allows you to get feedback in real time and discover immediately that you may be causing an emotional reaction in the other person. You may choose to moderate your speech accordingly. In addition, when you speak, you say things in a particular tone of voice with a particular prosody. I might say, “Wow. That is a really interesting dress.” I could say this and sincerely mean precisely that. If I type those words, however, you do not actually hear my voice. Instead, you “hear” these words mentally with the intonation you put on them. You may hear me say it sarcastically even though it was not intended that way. Alternatively, you could “hear” me say those words suggestively, as a come on, even though I intended nothing of the sort.
In couples therapy, people are often encouraged to use “I talk” instead of “You talk.” What this means is that it works more productively for me to talk about how I feel about you and what you do than about what you do and how you should change. It also works better to be specific and to seek a solution rather than to be general. For example, let’s suppose I find my socks scattered all about the house. It works better to say, “This evening, after a hard day at work, I felt a sense of eager anticipation as I opened the front door. Then, when I saw socks strewn about the living room, my heart sank. I would be really happy if I saw no scattered sox,” than to say, “You are such a slob! You don’t care about my sox. You always strew them everywhere!” Your spouse is much more likely to react favorably to the first statement than the second. Of course, in our case, the real culprits are the cats. And no amount of coaxing or coaching, however lovingly I couch it, will dissuade the cats from strewing my sox about. If I want them to quit, I will have to put the sox out of reach. Similarly, people being what they are, one cannot simply ask them to behave well. The situation must include guidance and enforced penalties for misbehavior, as well as perceived benefits for good behavior. Should companies provide (optional?) guidelines on rules of discourse, such as being specific and using I-Talk?
While the formal properties and terms of service of the social media may be a strong force in influencing behavior, they are not determinative. For example, in the early days of AOL, there were “chat rooms” which allowed up to 21 or 22 people to enter. People could only input a couple lines at a time. Most chat rooms that I explored were largely filled with “age sex location checks” and trivial talk. I tried on several occasions to engage people in more serious debate and discussion on issues of importance to the future of civilization. My wife made similar attempts. Generally these attempts failed. But on some occasions, we both entered the same chat room and began more serious discussion. On these occasions, people were much more likely to move to that type of interaction than if just one of us tried it alone.
At this time, there were several “Native American” chat rooms. These chat rooms were completely different from the “typical ones.” I could “tell a story” — a long story — two lines at a time and no-one would interrupt. When I finished a story, people would comment. After that, someone else would “tell” a long story — again without interruption for perhaps a half hour or more. At the end of that, people would comment on the story. So, the formal characteristics of the medium could prove adequate for several quite different modes of communication depending on how people acted.
If you read the “Terms of Service” of various social media, you may quickly come to the conclusion that their main motivation is to make money. After all, they are for-profit corporations. However, it seems clear that some thought has been given to safety and privacy concerns. It’s less clear that much consideration has been given to how these social media may be shaping (or misshaping?) society as a whole.
We drive our private cars on public roads. We have considerable freedom in how, when, and where we drive. But we are not allowed to drive north on a one-way, southbound street. We are not allowed to weave in and out of traffic, speed recklessly, or block traffic by sitting still in the middle of the road. The car manufacturers do not control these laws. They are enacted for the benefit of society as a whole. Safety is a large consideration, but not the only one. (If it were, we might have a world-wide speed limit of 35 or 40 mph.) The rules recognize that safety is important, but so is “reasonable” speed. We tolerate a fair number of deaths every year in order to accommodate speed. But if we were killing half the population, we would insist on changing the rules. Perhaps it is time to start considering changing the rules about how we use social media. Perhaps the Terms of Service should not be the sole province of the companies who provide the platform, any more than the construction companies that build our roads are the sole determiners of traffic laws, fines, and penalties.
There are many other thoughts on media, its impact on society, and how to make it a better force for good. Here is just a small sample.