Oct 13 2017
A Poor Marker of Truth
As a recent Atlantic article recounts, in the early 1800s steam-powered printing presses were making the distribution of information cheaper and faster. It didn’t take long for someone to figure out that this was an opportunity. In 1833 Benjamin Day (who was just 23 – the Zuckerberg of his age) founded the New York Sun.
The paper was the first of the “penny press” – sold for just a penny to increase distribution, and then monetized through advertising. This was a new paradigm – Day was not really selling information to the masses, he was selling the attention of the masses to advertisers. This flipped the incentives. He no longer had an incentive to produce quality information (because information was not the product), but rather to print whatever information got the most attention (which was his product).
So, in 1835 Day printed a series of stories about how astronomers, using a new telescope, were seeing bat people on the moon. The story “went viral” and fooled most people. Rival newspapers worked to debunk the stories, and Day eventually admitted the whole thing was a hoax. That hoax may have been over, but it spawned an age of tabloids that continues to this day.
The printing press of the 21st century, of course, is the internet, and attention is the coin of the realm. This creates an inherent dilemma for our society – because attention is a poor marker of truth.
The internet is not just a cheap, fast, and easy way to spread information. It is also a force multiplier. Small information campaigns can end up having a massive effect, for two important reasons. One is that the inherent structure of the web allows for and encourages the spread of information. Some kinds of information spread faster and wider than others. So we need to ask ourselves – what features of information will make it spread more through social media? It’s not accuracy, or thoroughness, or fairness. Bite-sized nuggets of drama or humor seem to do the best. If your information is unencumbered by reality, that is an advantage.
Second, information can be targeted. You don’t necessarily have to get a story to as many people as possible, just the right people. Social media algorithms, designed to meet our apparent desires, make this not only possible, not only easy, but almost the default.
Some will argue that this is all for the good. This is the democratization of information, and we should just let the free market of ideas sort it out. While there is a kernel of truth here, this view is also profoundly naive in my opinion.
I am a strong believer in the power of the marketplace. It is, essentially, a bottom-up evolutionary force that generates information through the individual decisions of countless actors. This power should be harnessed.
However, marketplaces are not voids. They have structure and rules, and those rules have a significant influence on outcome. We need to explore how the marketplace itself is influencing outcomes. Keeping with the evolutionary analogy – the marketplace is like the environment. Evolution adapts populations to the environment. But also the behavior of individuals in the various populations helps shape the environment.
So – we need to think about human nature and how that interacts with the marketplace of ideas. As I stated above, it is pretty clear that if the marketplace favors attention above all else, then informational products that favor attention will dominate. But this may not be in the best interest of our society. It’s also not what most people actually want, but rather may just be the path of least resistance.
As an example, most people don’t want to be overweight, but they get there often just by going with the flow of their natural behavior in the context of an environment where the marketplace favors calorie-dense foods and large portions. Therefore, there may be a disconnect between what people want when they think about it, and how they behave by default.
Similarly, most people would probably indicate that they do not want to be fed misinformation, lies, and entertaining hoaxes, even though that is exactly what they will buy in the checkout line of the grocery store. We may want accurate and true information, but then, through the links we click and the social media we frequent, guarantee that is exactly what we will not get.
So – should we give people what they choose with impulsive or default behavior, or what they say they want when they actually take the time to consider their choices? There is no easy answer here, because any method we choose will inevitably require someone having control over the choices available to us, and will likely have unintended and unwanted consequences.
There is some low-hanging fruit here, some win-wins that we should definitely do. For example, printing calories on a menu is not restricting anyone’s choice, but informing that choice at the time it is being made (recent data suggests this might actually work).
Likewise, maybe Facebook, Google, and YouTube should not by default set their algorithms to give you information that will thoroughly encase you in an echo chamber that may reflect your choices but not your desires. Maybe they shouldn’t sell ads to Russian propaganda outlets trying to upset our elections.
There may also be utility in exploring ways to label news stories like restaurants label menu items. But again – who gets to decide what is “fake news?” This used to be the job of editors, but their role is diminished in today’s world. Still, some kind of transparent and reasonable labeling system for news sources would probably be a net benefit.
Ultimately, the best solution is for individual providers of information to be ethical, to actually care about the truth. Individual consumers of information also need to be discriminating and skeptical. This is a cultural and educational phenomenon that no algorithm will fix.
People have to care about truth and accuracy, and then need to know what that means. Like so many issues I deal with, therefore, this ultimately gets back to educating the public to be better critical thinkers.