Two Supreme Court Cases That Could Break the Internet
In February, the Supreme Court will hear two cases, Twitter v. Taamneh and Gonzalez v. Google, that could change how the Internet is regulated, with potentially huge implications. Both cases concern Section 230 of the Communications Decency Act of 1996, which grants Internet platforms legal immunity for content posted by their users. The plaintiffs in each case allege that the platforms violated federal anti-terrorism statutes by allowing content to remain online. (Section 230 includes an exception for content that violates federal criminal law.) Meanwhile, the Justices are deciding whether to hear two other cases, concerning laws in Texas and Florida, about whether Internet platforms can remove what they deem to be offensive or dangerous political content. Those laws arose out of claims that platforms were suppressing conservative voices.
To talk about how these cases could change the Internet, I recently spoke by phone with Daphne Keller, who teaches at Stanford Law School and directs the Platform Regulation Program at the Stanford Cyber Policy Center. (She previously served as Associate General Counsel at Google, until 2015.) In our conversation, which has been edited for length and clarity, we discussed what Section 230 actually does, the different approaches the Court could take in interpreting the law, and why regulating platforms tends to have unintended consequences.
How prepared should people be for the Supreme Court to fundamentally change the way the Internet works?
We should be prepared for the Court to change a lot about how the Internet works, but I think they could go in so many different directions that it’s very difficult to predict the nature of the change or what anyone should do in anticipation of it.
Until now, Internet platforms have been able to let users share speech fairly freely, for better or for worse, because they have had immunity from liability for much of what their users say. That is thanks to the law known colloquially as Section 230, which is probably the most misunderstood, misreported, and hated law on the Internet. It provides platforms with immunity from certain kinds of claims based on their users' speech.
These two cases, Taamneh and Gonzalez, could both change that immunity in a number of ways. If you look just at Gonzalez, which deals directly with Section 230, the plaintiffs are asking the Court to say that there is no immunity when a platform makes recommendations or does personalized targeting of content. If the Court answered only the question in front of it, we could be looking at a world where platforms are suddenly liable for everything in a ranked news feed, like Facebook's or Twitter's, or for everything that gets recommended on YouTube, which is what the Gonzalez case is about.
If platforms lost immunity for those functions, we would suddenly find that the most used parts of Internet platforms, the places where people actually go to see what other users are saying, were locked down or restricted to only the safest content. Maybe we wouldn't get things like the #MeToo movement. Maybe we wouldn't see police videos getting seen and spreading like wildfire, because people share them and they show up in ranked news feeds and as recommendations. We could see a very big change in the kinds of online speech available on what is effectively the front page of the Internet.
The trouble is that the speech at issue in these cases is truly horrific and dangerous. The cases involve plaintiffs whose family members were killed in ISIS attacks. They want that kind of content to disappear from these feeds and recommendations. But a lot of other content would also disappear, in ways that affect free-speech rights and have disparate impacts on marginalized groups.
So the plaintiffs’ arguments boil down to the idea that Internet platforms and social-media companies don’t just passively allow people to post things. They package them, run them through algorithms, and promote them in specific ways. And so they can’t simply wash their hands and say they have no responsibility here. Is that right?
Yeah, I mean, their argument has changed dramatically even from one brief to the next. It’s a little hard to pin down, but it’s close to what you said. Both sets of plaintiffs lost family members in ISIS attacks. Gonzalez went up to the Supreme Court as a question about Section 230 immunity. The other case, Taamneh, goes up to the Supreme Court as a question about the underlying claim itself; the underlying law is the Anti-Terrorism Act.
It sounds like you have real concerns about these companies being held liable for anything posted on their sites.
Absolutely. And also about holding them liable for anything in the ranked, amplified, algorithmically shaped parts of the platform, because that’s basically everything.
The implications seem potentially harmful, but as a theoretical idea, it doesn’t seem crazy to me that these companies should be held accountable for what’s on their platforms. Do you feel that way, or do you think it’s actually too simplistic to say that these companies are responsible?
I think it’s reasonable to put legal liability on companies when it’s something they can respond to well. If we believe that legal responsibility can get them to accurately identify illegal content and take it down, that’s when it makes sense to put that responsibility on them. And there are situations under U.S. law where we do put that responsibility on platforms, and I think that’s right. For example, Section 230 provides no immunity from federal criminal law, so there is no immunity for child-sexual-abuse material. The idea is that this content is so incredibly harmful that we want to hold platforms accountable, and it’s easily recognizable, so we’re not worried about them accidentally taking down a whole bunch of other important speech. Similarly, we as a country chose to prioritize copyright as a harm the law responds to, but that law puts in place a number of processes to try to keep platforms from taking down, willfully or accidentally, everything that seems risky or that someone accuses of infringement.
So there are situations where we put the onus on platforms, but there’s no good reason to think that they would do a good job of identifying and removing terrorist content in a situation where immunity simply goes away. I think we would have every reason to expect that a bunch of legitimate speech about, say, U.S. military intervention in the Middle East or Syrian immigration policy would disappear in that situation, because platforms would worry that it might create liability. And the speech that disappears would disproportionately come from people who speak Arabic or who are talking about Islam. There are a number of very predictable problems with putting this particular set of legal responsibilities on platforms, given the capabilities they have right now. Maybe there’s a future world with better technology, or better involvement of the courts in deciding what comes down, or something else that reduces the worry about unintended consequences, and then we might really want to put these obligations on platforms. But we are not there now.
How has Europe responded to these issues? They seem to be putting pressure on tech companies to be transparent.
Until recently, Europe had the legal situation these plaintiffs are asking for. Europe had one major piece of legislation governing platform liability, passed in 2000, called the e-Commerce Directive. It embodied the fairly crude idea that if platforms “know” about illegal content, they have to take it down in order to keep their immunity. And what they found, not surprisingly, was that the law led to a lot of bad-faith accusations from people trying to silence their competitors or people they disagreed with online. It led platforms to take down too much in order to avoid risk and inconvenience. So European lawmakers overhauled it in a law called the Digital Services Act, to get rid of, or at least try to get rid of, the risks of a system that tells platforms they can make themselves safe by silencing their users.