These 26 words “created the Internet.” Now the Supreme Court may come for them
Washington (CNN) Congress, the White House and now the US Supreme Court are turning their attention to a federal law that has long served as a legal shield for online platforms.
This week, the Supreme Court is scheduled to hear oral arguments in two key cases involving the moderation of online speech and content. Central to the debate is Section 230, a federal law that has been heavily criticized by both Republicans and Democrats for various reasons but has been defended by tech companies and digital rights groups as vital to a functioning Internet.
The tech companies involved in the cases have cited the 27-year-old statute in arguing that they shouldn’t face lawsuits alleging they knowingly provided substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.
A series of rulings against the tech industry could significantly narrow Section 230 and its legal protections for websites and social media companies. If that happens, the Court’s rulings could expose online platforms to a host of new claims over how they present content to users. Such an outcome would represent the most significant restrictions ever placed on a legal shield that predates today’s biggest social media platforms and has allowed them to fend off many content-related lawsuits before they gain traction.
And there could be more to come: the Supreme Court is still weighing whether to hear several additional cases involving Section 230, members of Congress have expressed renewed enthusiasm for rolling back the law’s protections for websites, and President Joe Biden has called for the same in a recent article.
Here’s everything you need to know about Section 230, the law known as “the 26 words that created the Internet.”
A law born in the early days of the World Wide Web
Section 230 of the Communications Decency Act, passed in 1996 during the early days of the World Wide Web, was meant to nurture startups and entrepreneurs. The text of the legislation recognized that the Internet was in its infancy and risked being stifled if website owners could be sued for things other people posted.
One of the law’s architects, Oregon Democratic Senator Ron Wyden, said that without Section 230, “all online media would face frivolous lawsuits and pressure campaigns from the powerful” seeking to silence them.
Section 230 also expressly allows websites to remove content they deem objectionable, creating a “Good Samaritan” safe harbor for moderation; the federal government can still sue platforms for violating criminal or intellectual property laws.
Contrary to what some politicians have claimed, Section 230’s protection does not depend on a platform being politically or ideologically neutral. Nor does the law require a website to be classified as a publisher in order to “qualify” for liability protection. Other than meeting the definition of an “interactive computer service,” websites do not need to do anything to receive the benefits of Section 230; its protections apply automatically.
A central provision of the law holds that websites (and their users) cannot be treated, legally speaking, as the publishers or speakers of other people’s content. In plain English, that means any legal liability tied to publishing a given piece of content rests with the person or organization that created it, not with the platforms on which it is shared or the users who re-share it.
Section 230’s seemingly plain language belies its sweeping effect. Courts have repeatedly accepted Section 230 as a defense against claims of defamation, negligence, and other allegations. It has protected AOL, Craigslist, Google, and Yahoo over the years, building up a body of law so broad and influential that it is considered a pillar of today’s Internet.
“The free and open Internet as we know it couldn’t exist without Section 230,” the digital rights group Electronic Frontier Foundation has written. “Important court rulings on Section 230 have held that users and services cannot be sued for forwarding email, hosting online reviews, or sharing photos or videos that others find objectionable. It also helps to quickly resolve lawsuits that have no legal basis.”
In recent years, however, critics of Section 230 have increasingly questioned the scope of the law and proposed restrictions on the circumstances under which websites can invoke the legal shield.
Bipartisan criticism for various reasons
For years, much of the criticism of Section 230 has come from conservatives who say the law allows social media platforms to suppress right-wing views for political reasons.
By protecting platforms’ freedom to moderate content as they see fit, Section 230 shields websites from lawsuits that might arise from that kind of content-based moderation, though social media companies have said they make content decisions based on violations of their policies, not on ideology.
The Trump administration tried to turn some of those criticisms into concrete policy, with potentially significant consequences had it succeeded. In 2020, for example, the Justice Department published a legislative proposal to amend Section 230 that would have created an eligibility test for websites seeking the law’s protections. That same year, the White House issued an executive order urging the Federal Communications Commission to interpret Section 230 more narrowly.
The executive order faced a number of legal and procedural problems, not least that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency which, by law, does not take direction from the White House.
While the Trump-era efforts to curtail Section 230 never came to fruition, conservatives are still looking for opportunities to do so. And they are not alone. Since 2016, when social media platforms’ role in spreading Russian election disinformation opened a national dialogue about how companies handle toxic content, Democrats have increasingly railed against Section 230.
By protecting platforms’ freedom to moderate content as they see fit, Democrats say, Section 230 has allowed websites to escape liability for hate speech and misinformation that others have identified as objectionable but that social media companies are unable or unwilling to remove themselves.
The result is a bipartisan hatred of Section 230, even if the two sides can’t agree on why Section 230 is flawed or what policies might adequately replace it.
“I would be willing to bet that if we took a vote on a simple repeal of Section 230, it would clear this committee with almost every vote,” Sen. Sheldon Whitehouse, D-Rhode Island, said at a Senate Judiciary Committee hearing last week. “The problem where we get bogged down is that we want more than repeal; we want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what ‘XYZ’ is.”
The courts are leading the way
The impasse has increasingly left it to the courts to shape Section 230, above all the US Supreme Court, which now has the opportunity to dictate how far the law’s protections extend.
Critics of the tech industry have called for more legal exposure and accountability. “The massive social media industry is largely shielded from the courts and a range of normal developments in the law. It is highly irregular for a global industry with staggering influence to be immune from judicial inquiry,” the Anti-Defamation League wrote in a Supreme Court brief.
For the tech giants, and even many of Big Tech’s fiercest competitors, that would be a bad outcome because it would undermine what has allowed the Internet to flourish. Narrowing Section 230, they argue, could put many websites and users at sudden and unwitting legal risk, and it would dramatically change how some sites operate in order to avoid liability.
The social media platform Reddit argued in a Supreme Court brief that if Section 230 were narrowed so that its protections no longer covered a website’s recommendations of content a user might enjoy, it would “drastically expand the potential for Internet users to be sued for their online interactions.”
“Recommendations are what make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. “It’s users who upvote and downvote content, thereby deciding which posts get featured and which get obscured.”
People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for recommending a defamatory or otherwise offensive post created by someone else.”
While this week’s oral arguments won’t be the end of the debate over Section 230, the outcome of the cases could lead to massive changes the likes of which the Internet has never seen before, for better or worse.