Facebook's Free Speech Dilemma
Facebook recently updated its community standards:
Do not post: Threats that could lead to death (and other forms of high-severity violence) of any target(s) where threat is defined as any of the following: Statements of intent to commit high-severity violence; or Calls for high-severity violence (unless the target is an organization or individual covered in the Dangerous Individuals and Organizations policy, or is described as having carried out violent crimes or sexual offenses, wherein criminal/predator status has been established by media reports, market knowledge of news event, etc.)
The exception to the rule is quite interesting. I seriously do not envy Facebook's job. I'm a free speech absolutist, so I wouldn't run a platform this way, but once you wade into this territory, holy fuck, you are just covered in brambles and struggling to take thorns out of your petticoat. There's no way to win.
There's an excellent Radiolab episode describing the history of Facebook's "speech constitution," which I think is a great starting point for this type of discussion. The first major flashpoint was over breastfeeding pictures. Female nipples are not ok. Simple enough, right? But then breastfeeding mothers launched a major protest on Facebook, and the company capitulated. Now female nipples are still not ok, except in instances of breastfeeding. That would make everyone happy, right? Nope. Soon after that rule, people started posting pictures of topless women with a baby nearby. Facebook then responded with an "attachment clause": the baby's lips had to be touching the woman's nipple. Problem solved! Nope! Next thing you knew, there were pictures of topless 25-year-old women with a teenager sucking on their nipples. Eventually Facebook settled on a quasi-stable equilibrium when it implemented an age cap on the child. But how old is too old? Since the WHO recommends breastfeeding until 18 months, Facebook settled on an arbitrarily drawn line: if the child can walk on its own, then no breastfeeding pictures.
This is an enormous amount of work to delineate just one type of situation, let alone the infinite flora of human experience that gets posted to the site.
So with all that in mind, I can sort of see what Facebook is trying to do. They don't want death threats, but what if somebody wants to post "Kill all ISIS fighters" or "All criminals should be hanged"? There's a gut-check sense that it seems wrong to ban content like that. So Facebook is essentially drawing another arbitrary line in the sand.
I can see this breaking apart very quickly. Is it ok to say "Kill all cops" when posting a news story about a malevolent police officer shooting an unarmed person who was running away from them? This also leaves open a giant fucking gaping hole for unpopular ethnic groups. If Myanmar's media goes on a rampage and starts accusing the Rohingya people of committing mass violent crimes, is it then ok to post "Kill all Rohingya"? Facebook is really in an unwinnable position. Keep in mind that any rule it wants to change has to fit within every single jurisdiction it operates in. It's a lost cause.
This is further compounded by the economics of the enterprise. A single individual may have a coherent grasp of what's appropriate (say, what counts as "pornography"), but they're also not going to be responsible for the millions of content reports Facebook receives every year. The problem is teaching other people how to see it. The only economically feasible way to moderate content is to farm large portions of it out to cheap-labor countries like India and the Philippines, which have vastly different notions of pornography. Even when content moderation is done domestically, it's impossible to account for every single instance. Moderators work under a strict time crunch and are graded on how many instances they miss. How do you enforce something as nebulous as "take down content if it's pornography, but if you're wrong enough times you're fired"?
Does this mean that the solution is to just stop censoring/moderating altogether? The steelman response is that every platform would quickly turn into either 4chan, Voat, or PornHub, with no real way to avoid that equilibrium. You can see a real-life example in the "free speech" bulletin boards put up on campuses (decades ago, back when that was cool), which quickly got covered with swastikas and 'nigger' by anonymous users. So how do you keep something usable for a large portion of people when there's no editorial control? If you can point to a working example, I'm all ears.