Section 230… What Is It and Why Should I Care?
The US Supreme Court is currently hearing oral arguments in Gonzalez v. Google, the first time it has considered the immunity granted to online platforms under Section 230 of the Communications Decency Act of 1996.
Wait… Communications are meant to be decent? Why wasn’t I ever told?
(You were. Your momma always told you that if you couldn’t say anything nice, you shouldn’t say anything at all.)
So… what is Section 230 and why should you care? Glad you asked:
What this is: Section 230 functions as a shield for online platforms that might otherwise be held liable for what their users post. (Think of the defamation lawsuits Facebook, Twitter, and Reddit would face if they were held responsible for their users’ online statements.) Under Section 230, the individual who posts the content remains responsible, while the platform that hosts it is shielded from liability.
Many people (from all sides of the political spectrum) have taken issue with Section 230. Some believe it promotes the censorship of free speech, since platform owners are given discretion over how to moderate the content posted on their sites. (Consider, for example, how a certain former President was temporarily banned from Twitter.) Others believe that Section 230 encourages unfettered hate speech and dangerous propaganda (Russian Twitter bots, for example).
Additionally, many people view these platforms as actively promoting (or suppressing) content based on complex algorithms. Algorithms are fairly new legal territory, and most platforms keep their content-ranking algorithms and formulae under lock and key as trade secrets.
However, when platforms are not merely public fora but instead actively engineer what a user experiences, the legal questions become more complex. Nohemi Gonzalez was killed in the 2015 ISIS attacks in Paris. Her family has since brought suit against Google, claiming that it promoted ISIS videos through YouTube’s recommendation algorithm and that such promotion goes beyond the content moderation Section 230 protects.
Why you should care: Section 230 laid the foundation for the internet as we know it. If it hadn’t existed at the emergence of the modern internet, social media sites would look fundamentally different, and review forums like TripAdvisor and Yelp might never have existed.
So, what would happen if Section 230 immunity were to be narrowed or repealed? Glad you asked.
Without this immunity, every mean Facebook comment left by a racist, a white supremacist, or even an estranged aunt could expose the platform hosting it to a defamation lawsuit. (Yes, Aunt Ouisie, I’m talking about you.) How might this change your i-experience?
- Established companies with legal teams and the budgets to wade through waves of lawsuits would fare far better than startups without the same means. Additionally, without Section 230 protection, sites built on user-generated content, like TikTok, would most likely blink out of existence, and new and emerging internet fora might be judged too legally risky to launch or maintain.
- If the Court finds for Gonzalez and narrows Section 230 protections to exclude platform algorithms that boost specific content, the typical user experience would radically change. (For example, your Instagram feed would most likely revert to chronological order; see the short sketch after this list.)
- Many people make their living through content creation and rely on platform recommendation algorithms to put their content in front of viewers. Their business models would also fundamentally change… or disappear.
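For the technically curious, here is a minimal, purely illustrative sketch (in Python) of the difference between a chronological feed and an engagement-ranked feed of the kind at issue in Gonzalez. The posts, field names, and scoring formula are all made up for illustration; this is not any platform’s actual algorithm.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int
    shares: int

# Hypothetical sample posts -- not real data.
posts = [
    Post("aunt_ouisie", "Back in my day...", datetime(2023, 2, 20, 9, 0), likes=3, shares=0),
    Post("travel_blog", "Top 10 Paris cafes", datetime(2023, 2, 19, 18, 30), likes=250, shares=40),
    Post("local_news", "City council meets tonight", datetime(2023, 2, 20, 8, 0), likes=12, shares=2),
]

def chronological_feed(posts):
    """A 'neutral' feed: newest posts first, no editorial weighting."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def ranked_feed(posts):
    """An engagement-ranked feed: the platform's own scoring decides what you see first."""
    def score(p):
        return p.likes + 5 * p.shares  # made-up weighting
    return sorted(posts, key=score, reverse=True)

if __name__ == "__main__":
    print([p.author for p in chronological_feed(posts)])  # ordered purely by time
    print([p.author for p in ranked_feed(posts)])         # ordered by the platform's own score
```

Roughly speaking, the question before the Court is whether the choices baked into something like the second function are still just “hosting” third-party content, or something more that falls outside Section 230’s shield.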
Of course, the Court would not be sailing into unknown waters. This wouldn’t be the first time Section 230 was limited: in 2018, Congress carved out an exception so that platforms can be held liable for sex trafficking ads posted to their sites by third parties.
Yes, dear Reader, we live in exceptional times (pardon the pun). We must therefore prepare ourselves for the effects that changes in the law will have on our (virtual) reality.