by John Manning and Sebastian Zein

Is there a market failure for factual information?

In 2004, Eli Noam, a Columbia University economics professor, observed that technology was pushing the price of information down to almost nothing. Constant downward pressure, he warned, could push prices so low that a viable market structure would be impossible to sustain.

It’s hard to remember, but this was a time when Facebook was a niche club for Ivy League kids to show off to each other. Professor Noam was far ahead of the curve. Much discussion has focused on how algorithmic social media, primarily but not only Facebook, has enabled and encouraged the distribution of fake news. But his point is as true today as it was in the pre-algorithmic era: technology has largely destroyed the market structure that used to support fact-based journalism.

While traditional media is by no means free of its share of pretenders muddying the waters, higher barriers to entry (sustained by the need to maintain costly physical infrastructure) historically limited the supply of information enough that prices could be supported, so long as there was a large market for the product. This in turn produced “news” for the masses that, as journalistic standards evolved, kept its integrity, to a greater or lesser degree, through fact-checking and severe consequences when stories turned out to be false.

But now that we have entered an era of information overabundance, with little disincentive to publish material that is demonstrably false and click-based advertising often easily covering the marginal cost of producing it, we have to ask: is there a market failure for high-quality, factual information? If so, how and when will it be resolved? And what will happen in the meantime?

If you haven’t taken a look at the findings of the November 2016 Stanford study on middle school, high school, and post-secondary students’ ability to recognize fake news, you should.

We see it this way: a healthy democracy relies on a shared body of facts from which voters make their choices. And if we are all operating from a different set of facts, some of which are not facts at all, how are we to have meaningful policy debates in public?

Market remedies for information asymmetry do exist: Carfax and CarProof make businesses out of telling you whether you’re about to buy a lemon, Yelp exists to tell you where it’s worthwhile to dine out, and so on. And yes, you can pay to get your news from one or more reputable organizations. But given that Facebook still largely treats real news and fake news the same way it treats posts about selfies and baby showers, and that Facebook accounts for up to 40% of all time spent online by some estimates, there is no denying we have a real problem.

Facebook is already experimenting with a peer-rated negative certification system. Ahead of the recent elections in Germany, it deployed a feature allowing users to report a post as suspected fake news. A report triggers a referral to third-party fact-checkers, less favorable algorithmic treatment, and, if the story is found to be untrue, a warning tag on the post and a warning notice for users who subsequently try to share the article.

More features will undoubtedly come out. But when that will happen, whether it will be enough, and what damage will be done in the meantime (and potentially afterwards) – these are the questions we should all be asking right now.

