Pushing §230 Off Its Pedestal
I'm late to the game on this one, but back in May WIRED published a "skeptical" feature by Gilad Edelman on everyone's favorite §230, titled "Everything You've Heard About Section 230 Is Wrong" (unfortunately still paywalled).
I consider myself a §230 fanboy slash apologist, and I think it's one of the greatest pieces of internet-related legislation in history. But I have to confess that Edelman's piece definitely made me rethink my position.
Before the internet, the distinction between publisher and distributor was fairly well established for legal purposes. Publishers could be held liable for content because it's reasonable to assume they are aware of it and thus on notice of any harm it may cause, while distributors (think booksellers) would not be liable because it's reasonable to assume they have no idea what's inside the books they carry. So if an author made a libelous statement, the author could be sued and so could their publisher, but the bookshop carrying the book couldn't be. Seems reasonable enough.
When the internet showed up, the law tried to shoehorn it into the existing legal paradigm and ran into some problems. Two cases set the scene. In 1991, CompuServe was sued for hosting defamatory content within its forums. But the court ruled that because CompuServe did not moderate its content, it was really more like a distributor, and therefore shouldn't be liable unless it either knew or had reason to know that it was hosting defamatory content. Seems reasonable enough.
But then in 1995, Prodigy Services was sued for hosting allegedly defamatory content accusing Stratton Oakmont of securities fraud (yes, the same Stratton Oakmont from The Wolf of Wall Street, which means the allegation of fraud was very likely true). Because Prodigy actually did moderate its content, the court held that it exercised "editorial control" much as a publisher would, and was therefore liable for any potentially defamatory content it hosted.
The Prodigy ruling seriously spooked people concerned about the health of the still-nascent internet. It created what became known as the "moderator's dilemma": the choice was between 0% moderation and 0% liability OR >0% moderation and 100% liability. Given those constraints, it seemed fairly clear there'd be absolutely no incentive to moderate anything. Less than a year later, §230 was passed into federal law. That's where we get the hallowed 26 words:
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Soon after, judges of all political persuasions interpreted the words literally and rather expansively. It quickly morphed into a robust form of legal immunity, albeit one reserved solely for "interactive computer service" providers.
I'm not aware of any attempts to defend the Prodigy ruling today. If that ruling had remained law, it seems fairly obvious that the majority of the internet as we know it, fueled and driven primarily by user-submitted content, would not exist outside a few small niche platforms that could exercise reasonably tight control over their content. But Edelman convincingly argues that even if you accept that the Prodigy ruling was misguided, it didn't have to stay that way. Courts make mistakes, and the common law tradition allows different courts in different jurisdictions to rethink their rulings in the face of new evidence or changed circumstances. In support of this, Edelman points north. Canada has a common law legal system and nothing like §230, yet it has plenty of websites that still allow what appears to be robust user-generated content. The absence of §230 does not appear to be a death knell for the "internet as we know it", because the court system can reasonably react to changing circumstances.
Edelman also highlights a number of problems that an expansive reading of §230 creates. In Batzel v. Smith (2003), someone running an email listserv forwarded defamatory statements to their subscribers. Because the person doing the forwarding used "information provided by another", §230 shielded them from liability. I understand it's a literal reading of the law, but I disagree with the ruling because it does not encourage fact-checking or any due diligence by someone publishing forwarded content. Contrast this with newspapers, which can be held liable if they reprint defamatory statements made by someone else. The other case highlighted is MCW v. RipOff Report (2004). RipOff Report would publish unverified user-submitted complaints, play with SEO so they showed up near the top of search engine results, and then charge maligned businesses a fee to get rid of the complaints. No matter how predatory this business scheme is, §230 nevertheless shields them from liability because they're just posting what other people say.
I agree that the scenarios highlighted in these two cases are concerning. And while there has not been any strong ruling on the issue, I'm grateful to u/Mr2001 for correcting me and bringing to my attention that §230 very likely would also provide immunity from certain state anti-discrimination laws.
Overall, I'm now significantly less sanguine about the importance of §230. I still think it's generally a good law, but I have to grapple with the fact that Canada seems to be doing just fine without it, and that a common law revision allowed to percolate through the courts might have given us a much better tailored set of rules than the perhaps too expansive landscape §230 allows. I wondered whether I had accepted Edelman's thesis too credulously, so I tried to find opposing viewpoints. Mike Masnick is definitely a §230 evangelist, but I found his "takedown" article unconvincing and primarily obsessed with nitpicking irrelevant points.
Yet despite §230's status as a bête noire among populist conservatives like Cruz and Hawley, I flatly cannot comprehend their point of view. The complaint is based on the accusation that the big tech platforms are biased against conservatives (I'm not interested in litigating this, so I'll just accept it at face value for this post). I've repeatedly encountered proposals from conservatives that essentially amount to nationalization, either explicitly (the government should run Twitter as a "neutral" public square) or implicitly (the government should force Twitter to operate like a utility subject to regulatory oversight). §230 is repeatedly invoked as the cause of the problem, but I have yet to come across a source that explains exactly how. Rather, the main obstacle seems to be the First Amendment, since that's what guarantees big tech platforms the right to moderate speech however they want. Both Cruz and Hawley went to stellar law schools (Harvard and Yale, respectively) and both clerked for Supreme Court Justices, so I assume they're not complete idiots, but the incoherence of their crusade against §230 has led me to conclude that it serves primarily as a TV talking point rather than a serious legal argument.
Edelman also does not take the Hawley/Cruz position seriously, but his article is an excellent entry into the field of §230 skepticism.