Can Antitrust Law Save Innovative Ideas and Freedom of Speech?


Author: Jarod Bona

We are all connected. When something happens anywhere, we know about it everywhere. If someone has a great idea, they can tell everyone about it, right away.

These connections create incredible value. We as a society can take the best ideas and build upon them. Information as a resource is incredibly cheap and easily available. Filtering is the real problem now, but more on that later.

So, this is all great, but I worry. There is a downside to this over-connectedness, which I will come back to.

There was once a time when people were worried that government would suppress speech, ideas, and innovation. The government still does this, of course. But it seems like there is less worry about it. In many ways, the government doesn’t have as much power as it used to have. That is, in part, because of our connectedness.

Society can speak swiftly and harshly toward government action that is excessive, unfair, or wrong (in society’s judgment). This creates a check on government conduct, in the same way that many people used to consider the press the fourth branch of government. Indeed, the collective voice of the people on social media has all but replaced the press, who now, like everyone else, tend to mostly pick sides.

Society, of course, doesn’t speak in one voice, but a conventional wisdom often develops, and if you don’t follow it, you are criticized, harshly. This isn’t the case for every issue, obviously, but it is for many.

Back in the old days, speech took place in public forums—maybe a park, a convention hall, or even a mall. That still happens, but you can only reach so many people in one place (unless you video it and post it on YouTube, for example). So the government’s role in this type of speech matters much less than it once did.

Some influential speech still takes place on television. But people seem to be watching that less and less. Now, the most influential speech takes place online, mostly filtered through technology and social media companies like Google, Twitter, Facebook, etc.

I’ve mentioned the term “filter” several times now. There is so much content on these platforms that, unfiltered, it is overwhelming and just isn’t useful. Most people don’t have time to go through everything everyone says. So a filter is necessary.

If the government were to “filter” the speech at a public park, the ACLU and a bunch of other organizations would file a First Amendment lawsuit, and that is a good thing.

But now, the most influential forums for speech must be filtered. But how? I am not sure any one person really knows. The technology and social media companies use something called an algorithm to do the work. That seems like it could be a good idea, assuming the algorithm is a good one. But how do we know?

Should the algorithm vary depending upon the recipient of the information stream? Again, that seems like a good idea. Having something completely customized for you is sort of fancy and certainly helpful. We all have different interests, needs, etc. But we all have “views” on certain issues too. As we develop wisdom, experience, and knowledge, those ideas should evolve and, in some cases, change dramatically. My views are not the same as they were twenty years ago. Yours probably aren’t either.

The idea that each of us has fixed, unchanging views on everything from the origins of the universe to how to run a business to the most controversial political topics of the day is foolish.

But an algorithm that sends you a customized information stream is likely to send you a disproportionate amount of information that matches your present views and interests. In a way, that locks you into where you are—inhibiting your own growth. What if we aggregate that across, well, everyone on the planet who is on the internet? Uh oh!
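
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of an engagement-based filter. It is my own illustration, not any platform’s actual code, and the items, tags, and function names are made up. The filter ranks candidate items by how closely they match what the user has already clicked on, which is exactly the dynamic that narrows a feed toward existing views.

# A minimal, hypothetical sketch of engagement-based filtering, not any real
# platform's algorithm. Items and interests are simple topic tags; the "filter"
# ranks candidates by how closely they match what the user already clicked on.

from collections import Counter

def rank_feed(candidate_items, click_history, feed_size=10):
    """Return the top items, scored by overlap with the user's past clicks."""
    interest = Counter(tag for item in click_history for tag in item["tags"])

    def score(item):
        # Tags the user has engaged with before score higher; unseen tags score zero.
        return sum(interest[tag] for tag in item["tags"])

    return sorted(candidate_items, key=score, reverse=True)[:feed_size]

# Toy usage: a user who has only clicked one side of an issue keeps seeing that side.
history = [{"tags": ["policy-A"]}, {"tags": ["policy-A"]}]
candidates = [
    {"title": "More support for policy A", "tags": ["policy-A"]},
    {"title": "Evidence against policy A", "tags": ["policy-B"]},
]
print(rank_feed(candidates, history, feed_size=1))
# Only the pro-policy-A item surfaces; the dissenting item scores zero and drops out.

Even this crude version shows the feedback loop: each click tilts the interest counts a little further, so the next feed tilts a little further in the same direction.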

What else would be in a good algorithm? I suppose there would be some general rules that filter good information from bad information. Who wants bad information? Indeed, with the “fake news” phenomenon, we certainly want the technology and social media companies to filter out “fake news,” don’t we?

And if we are trying to stay healthy, it would be a good idea for the algorithm to filter out health claims that aren’t in the mainstream, right? And what about science? There are a lot of crackpot theories out there—how great to be able to filter those out too, so we just receive information about theories with strong support, the sort that most scientists in that subtopic would agree are accurate.

There are, of course, also dangerous ideas. You better filter those out—that is for the good of society. And the technology companies are responsive to society, i.e., the majority, so they will take care of that for you.

So now you receive a customized news, ideas, science and health feed that is meant for you and for society. The bad stuff is gone and you can pay attention to just the good stuff.

Take a short break, read your feed on your social media account of your choice, then come back. . . . .

. . . . [Hours later]. Welcome back.

Now think about whether there are any ideas—political, health, scientific, societal—that were conventional wisdom 10, 20, 30, 50, or 100 years ago but now look like really bad ideas. Can you think of any?

If you went back 10, 20, 30, 50, or 100 years and did the same exercise about previous time periods, do you think you would be able to find any ideas that the majority believed with all their heart, but now seem silly, damaging, or deadly?

What if during each of those time periods, we created some science-fiction-like existence in which we purged any voices that questioned that conventional wisdom? What if anyone who disagreed with what everyone else thought was attacked and socially exiled? That’s happened, of course. But what if in this crazy future world, we scaled it to include just about everyone in the world? And we did it in two ways—we only sent people information they liked and believed, and we filtered out the “bad” information for everyone in the world?

If that happened, I wonder if we’d achieve the same breakthroughs in exposing the ideas that, though everyone else believes them, are, in retrospect, silly, damaging, or deadly. Would we lock ourselves—individually and as a society (making up the entire world)—into where we are right now?

So, I suppose this connectedness and inevitable filtering (by someone or something) has both upsides and downsides. We have done well so far at exploiting the upsides. The market has rushed in and extracted enormous value from this opportunity, as we would expect.

Connectedness is great, until it isn’t. For that, please see the Great Recession, the connectedness of the world economy, and Nassim Taleb’s warning about the Black Swan. (As an aside, I highly recommend that you read Nassim Taleb. In my view, he is one of the most insightful thinkers (and doers) of our day. He wrote one of my favorite books, Antifragile: Things That Gain from Disorder).


Have we come to terms with the downside of this connectedness? Periodically someone will speak up about it, but nothing really happens.

If the government were filtering the information, I suppose we would see a lot of First Amendment lawsuits. But constitutional claims tend to require state action and it is the private technology companies that are doing the filtering. If they are so large and powerful and have such a monopoly over speech, should we consider that they are, in fact, exercising state action, making them subject to the First Amendment? That approach would have such extensive and far-reaching implications that we’d need to really think through it. I don’t have the answer.

Should the government regulate the filtering? That seems even scarier. I suppose if the government did regulate it, their actions would be subject to First Amendment scrutiny. But that only roughly helps. And do you really want a middling bureaucrat or political hack in charge of these algorithms? I don’t.

What else is there? We don’t want to overreact and just prohibit social media or widespread (and obviously filtered) search, which has incredible value.

Can Antitrust Law Help?

This isn’t a rhetorical question. I am asking because I don’t know the answer and am interested in your comments. But let’s think through it together.

Antitrust is good at maintaining structural competition. It isn’t good at regulating specific behaviors. For further comments on that, please see my article about behavioral remedies in merger regulation.

Right now, the big technology companies that filter for us rely on network effects. They are successful because everyone uses them. That is, a necessary part of their value is that everyone is on their platform. So any sort of blanket limit on size or overall use will knock out the benefits, and the benefits are considerable.
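
One rough way to see why size itself is the value is to count the possible connections among users. The short Python sketch below is my own illustration with made-up user counts, not anyone’s real data; it just shows that the number of possible pairs grows far faster than the number of users, so a rival platform with a fraction of the users offers only a tiny fraction of the value.

# A rough illustration (made-up user counts) of why platform value depends on
# everyone being there: the number of possible pairwise connections grows much
# faster than the number of users.

def possible_connections(users: int) -> int:
    """Count the distinct pairs of users who could interact on the platform."""
    return users * (users - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9,} users -> {possible_connections(n):>15,} possible connections")
# 10 users yield 45 pairs; a million users yield about 500 billion pairs,
# which is why a new platform with few users starts with almost nothing to offer.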

Most antitrust analysis relies on a model that seeks to maximize consumer welfare. What if we added to the antitrust analysis this concern about large private companies controlling speech and indirectly limiting innovation, through their filtering decisions?

On its surface, that sounds like a good idea. A company that abuses its filtering power could be subject to antitrust scrutiny, or something like that. But, unfortunately, that idea is flawed too, and severely so.

In the earlier days of antitrust, courts and government entities would sometimes pursue multiple, and necessarily competing, goals for antitrust enforcement. The result of that framework is that (1) antitrust decision makers, by choosing among multiple goals, are making policy determinations; (2) antitrust decisions become arbitrary and possibly abusive; and (3) we lose all predictability in antitrust enforcement, which causes a severe negative ripple effect on the economy.

One thing that is great about our current world is that technology and innovation happen so quickly—in part because of our connectedness—that monopolies are often toppled not by a competitor within the existing market, but by someone who invents an entirely new market.

(I realize, by the way, that by celebrating the great innovation of our day at the same time that I worry about its demise, it sounds like I am contradicting myself. But it is possible that some innovation thrives at the same time that other innovation suffers. If we can create the conditions for both to succeed, we really win).

So by creating the best conditions for competition in these platform markets and in possible new markets that filter our information, we may best protect against filtering abuse. And antitrust can help by improving competitive conditions.

But how are consumers or anyone else going to judge whether there is filtering abuse? Do we even know what filtering abuse is? Could we recognize it if we saw it?

If the problem were high prices or a low-quality product that breaks easily, the market would react swiftly. But what if the problem is both hidden and more damaging to the whole than to any particular individual?

Sometimes requiring greater information disclosure can improve markets because purchasers can make better, more informed decisions. Will that help here? If Google, for example, had to publish its algorithms, it would (1) be forced to give away valuable proprietary information; and (2) confuse most people, as the code is probably quite complex. And even if the coders of the world helped us, that would probably devolve into “society” debating about the proper algorithm, and the situation would likely get worse, not better (remember, filtering toward the current wants and desires of society, the majority, is part of the problem).

My initial view is that the best way that antitrust law can help this situation is by remaining vigilant to create the best conditions for competition. That must be paired with minimal non-antitrust government interference, as government intervention tends to thwart the marketplace by locking in existing competitors and raising entry barriers for new competitors.

Do you have a better idea? If so, let me know.

 

photo credit: Peter Ras Social Media via photopin (license)
