You're being snarky, but this is essentially true (as I understand it).
In the US, outside of a few legally protected characteristics, a company is free to exercise its speech rights by choosing not to provide service to customers it does not wish to serve.
The government forcing companies to provide service would be a violation of their speech rights.
Some companies go out of their way to try to be better than that because they recognize the influence they have on public access to platforms (Zuckerberg cares a lot about this: https://zalberico.com/essay/2020/06/16/mark-zuckerberg-and-f...), but ultimately all of that is above and beyond what the legal expectations are.
In FB's case it wasn't so much a policy change that led to them shutting down Trump's account, but that the US government's standing as a rule-of-law nation was in question after 1/6, so the situation no longer met their own policy requirements that give elected politicians in rule-of-law democracies a special pass to violate their moderation rules.
The irony is this is a variant of the same argument made in the gay cake case, but with the political partisanship reversed. The notable difference in that case is that sexual orientation is a protected characteristic.
"The government forcing companies to provide service would be a violation of their speech rights."
-- True, but the government can institute a bill of rights for digital platforms, and make it mandatory for companies that want to have section 230 protection.
Youtube would be free to choose between following the bill of rights, becoming regulated media like TV, or being exposed to defamation lawsuits and held responsible for its content.
What would that bill of rights look like? That's a different story... but Google, or any tech company, having the power to just delete your Gmail/email account and erase all your stored data at will is kinda scary.
There should be a happy medium. We do this for evictions already (i.e. there is a process for the eviction); we should do something lightweight for the digital space as well. E.g., if Google has determined you have abused your email, you get 30 days to retrieve your data and contacts and transfer them somewhere else. Losing access altogether is very disruptive.
> the government can institute a bill of rights for digital platforms, and make it mandatory for companies that want to have section 230 protection.
Social Media companies actually want this as well because then they can just deflect all unpopular decisions to this bill of rights. That's why Facebook spent $700 million to create their own content moderation "supreme court."
Congress had every opportunity to explore this issue further during the Congressional hearings about section 230. Unfortunately, they were more interested in whining about why this specific issue should be censored or why that specific issue should not be censored.
The irony is this is a variant of the same argument made in the gay cake case, but with the political partisanship reversed. The notable difference in that case is that sexual orientation is a protected characteristic.
The other far more notable difference is the baker didn't own all the cake shops.
And the "gay cake case" would also be closer to not just providing cork board space for people to stick pins and messages in; but also to require the engineers or CEO herself to write down the words that they put on the bulletin board, after they're orated to her.
The debate is not whether there is a legal requirement for YouTube to provide freedom of speech (categorically, there is not) but whether that is harmful to society. The concept of freedom of expression extends beyond the bound of US law.
Exactly. It makes me uneasy to see all these people who seem to have no idea that freedom of speech is or could be anything other than a particular law. (That there are benefits to having an unrestricted exchange of ideas in public, and that censoring some ideas is a slippery slope to censoring others.) If that law didn't exist, would it occur to them to create it?
The government compelling speech from private companies is a speech violation.
Arguably private companies and individuals should adapt by choosing better services that suit their interests. I'm not sure that compelling company speech is a good idea; it gets messy quickly (particularly around moderation).
I'm a fan of Urbit which I think is a clever model that makes all of this largely unnecessary.
I see. Then I think you're misunderstanding the position of those you're arguing against. Well, I don't know exactly what throwaway1959 believes, but he criticized Youtube's CEO for "censoring opinions of other people that she does not agree with". He did not say that the government should do anything about it; that was something you mentioned.
I don't know if this is a common misconception or if something else is going on, but it's fairly common for one person to say it's bad for a company to suppress free speech, and someone else to reply that it's not illegal for the company to do that, as if that were a counterargument. Is it believed that saying something is bad = saying it should be illegal? Not only is that a bad policy, it contradicts the ideal of freedom of speech itself: that bad speech should be allowed. I don't understand how someone could believe a free speech advocate would think that (other than by understanding them poorly or having a low opinion of their logical consistency).
For illustration, here's an entire article titled "The YouTube Ban Is Un-American, Wrong, and Will Backfire", 2300 words—none of which says that Youtube's actions are illegal, or should be, or even mentions the First Amendment. I think this is the position of free speech advocates generally: that suppressing free speech is bad and that those who do it should be criticized and shamed but not punished (unless it's the government, in which case it may be illegal). https://taibbi.substack.com/p/the-youtube-ban-is-un-american...
I think I understand the position, but the implication is often that it should not be allowed or that it is bad.
The core idea is I think YouTube should have the power to moderate their platform and that that is a form of legally protected speech (a private company deciding how to run itself).
People can disagree with YouTube's policies or moderation, but that's a personal opinion about their policies with regard to what they allow on their platform. I don't think it's that big of a deal for them to block stuff that makes the platform a worse place to be and they have the power to determine what that is.
If they block stuff I find interesting, for example if they blocked videos about cryptocurrencies, I would think that's a dumb rule and I wouldn't want to support them on that, but they should have the ability to make that decision. If they blocked critical videos about China I would think that's unethical and wrong. Their ability to block/moderate in general, though? That's a tool they should have and should use.
Private companies setting policy on what's allowed on their platforms is exercising a form of speech. I don't think private companies should be forced to allow any speech from any users or that that is even desirable. Communities without moderation suck.
The government should not compel speech - compelling companies to provide services is a form of violating free speech. I find this more objectionable than YouTube blocking stuff that violates their ToS.
I also think "censoring opinions of people that she does not agree with" is misleading enough to be false. Banning people like Steve Bannon or Alex Jones is not some sort of ban of good faith intellectual disagreement - it's just banning trolls.
This presumes that private corporations have rights. I’d posit that they do not. They are legal, government-created entities, which is different from the individual persons the Bill of Rights was written for. Furthermore, why would we want to grant the power to limit speech to private corporations when we don’t allow governments to do so? The latter is a democratically elected group while the former is almost the complete opposite.
These corporations are as big as governments, but y’all want to give them the rights of individuals. Only they’re run like top-down oligarchies with little to no accountability for their actions.
Well, we agree that the company should be allowed to set whatever policies they want, ban whoever they want, etc., and that it would violate their rights—arguably speech rights—for the government to compel them to unban anyone in particular (unless they signed a contract and the banning violated that contract, but I'm sure they have all the necessary CYA clauses in anything they sign).
That said...
> I think I understand the position, but the implication is often that it should not be allowed or that it is bad.
I just said that "it should not be allowed" and "it is bad" are very different things, and I think it's important to maintain that distinction. It's not entirely clear to me whether you do. (If you do, then I think you'd agree that it's a rather uncharitable assumption to make of one's interlocutor that his criticism was meant to imply "and the law should stop them" when it could just as easily have meant "they suck and I want everyone to know that".)
Here's an incomplete list of things that are bad but that the law should never disallow: mediocre parenting, getting unhealthily fat, insulting strangers for no reason, getting into a romantic relationship with someone you know wants something long-term and planning to dump them soon, cutting corners in products, making bloated websites, supporting political causes I don't like, encouraging people to invest in programming languages I don't like, etc.
There are pragmatic reasons to not disallow those things. In many cases, enforcement would necessarily be (a) very subjective, and therefore open to abuse, (b) require a very powerful police state, and/or (c) create a slippery slope of "Well, if we already disallow that, then surely also ..." that tends toward totalitarianism. (I suspect those who think "bad" = "should be disallowed" will end up convincing themselves either that a lot of these things should be outlawed, and become totalitarian morality police, or that a lot of these things aren't really bad, and become amoral. I suspect that this actually happens to some degree, but that the damage is limited because many people don't think very much about their beliefs.)
However, if that's your only defense against such policies, then you're vulnerable to special pleading. "This particular description of 'bad behavior' sounds objective, and at least the first step towards policing it doesn't require any significant police state growth, and of course we have no intention of extending the policy further [or of getting replaced in the next election by those who would]." If you're not as sophisticated as your opponent, or less prepared on a particular issue, or if your audience finds the concept of police state growth laughable rather than scary, then you may be caught flat-footed in a debate.
Instead, we have a conception of legal property rights and what counts as "violence" that more or less decides all these issues. Legal encroachments on these rights are bad, and at the very least they require extremely rigorous justification, which should be rarely achieved in practice. I think this is the only stance that is likely to hold up long-term against special pleading; though I think the majority of people aren't properly educated about the merits of this stance, and they tend to elect politicians that cheerfully grab power to fight the villains of the quarter (Terrorists! Copyright pirates! Child abusers! Insurrectionists! Rioters!).
> If they blocked critical videos about China I would think that's unethical and wrong.
I agree, but do you think that should be illegal? (I don't.) And if not, then what recourse do people have? Publicly complaining about it to try to change Youtube's mind seems like an obvious thing to try; beyond that, make or join an alternative to Youtube and try to convince others to do so. This is difficult to make work, partly due to network effects.
I do suspect that there are legal barriers to entry (that strengthen the network effects) that should not exist. For example, what if I made a Mytube, which did its best to interoperate with Youtube? E.g. it would show the like counts and comments of Youtube users, and if you commented on Mytube then it would show up on Youtube. Users could have a unified client that would see everything on Youtube as well as Mytube, and for the most part not notice or care which website stuff was actually stored on. I suspect Youtube would claim that using the unified client violated their terms of service; I think this is where we might say that the website cannot make a legally enforceable distinction between a user clicking buttons on a browser to send HTTP requests to their website, and a user clicking buttons on a unified client to send HTTP requests to their website, and that while they can try to detect when the unified client renders their webpage in an embedded browser and scrapes the DOM for the like counts and such, it's a cat-and-mouse game they might not win.
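For concreteness, here is a minimal sketch of what such a unified client might do, in Python. The Mytube endpoint and the CSS selector for YouTube's rendered like count are made up for illustration; neither is a real interface, and the scraping half would break whenever YouTube changes its markup (the cat-and-mouse game mentioned above).

    # Hypothetical unified client: merges YouTube's public pages with a
    # made-up Mytube JSON API. Endpoint and selector are illustrative only.
    import requests
    from bs4 import BeautifulSoup

    def youtube_like_count(video_id):
        # Scrape the rendered watch page; "#like-count" is a placeholder
        # selector that would need constant updating as the markup changes.
        html = requests.get(f"https://www.youtube.com/watch?v={video_id}").text
        node = BeautifulSoup(html, "html.parser").select_one("#like-count")
        return node.get_text(strip=True) if node else "unknown"

    def mytube_comments(video_id):
        # Hypothetical federated service exposing its own comments as JSON.
        return requests.get(f"https://mytube.example/api/comments/{video_id}").json()

    def unified_view(video_id):
        # The client merges both sources, so users never have to care which
        # site a given comment or like count is actually stored on.
        return {
            "video": video_id,
            "youtube_likes": youtube_like_count(video_id),
            "comments": mytube_comments(video_id),
        }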
But if that's impossible, then making very strong public criticism seems like the main tool that disgruntled users have against a monopolistic platform. Do you have other suggestions?
It's basically an application of freedom vs liberty. You are less free as a result of having to follow the rules of the road but have greater liberty by being able to travel safely and efficiently.
This balance is governance's equivalent of "three hard problems."
When it comes to platform moderation it seems like HN at large overwhelmingly holds the opinion that for speech, liberty is maximized with freedom. Which might be right, but personally I've experienced the reality of not being able to be myself on the internet because of all the hate I've received. The solution for my whole life has boiled down to "just pretend to be a straight white cis American man," to the point of having to use a voice-changer to play videogames online. It does work and solves the immediate problem. But the truth is also that it has gotten a lot better in recent years. We're finally out of the "there are no girls on the internet" dark ages, and I don't know if the ends justify the means, but something is working.
Your platform may be legally allowed to censor everything, but it's clear it doesn't take freedom of expression seriously.
If we add some more mental gymnastics, we can say the Chinese government supports all legal freedom of speech as well. If this sounds ridiculous but the former does not, we should evaluate why one type of censorship means freedom but another means oppression.
That element has already been evaluated, and is re-evaluated every time this argument is made.
A private platform is free to censor users as they see fit, as the users are free to A) not use their platform at all, B) use a competing platform, or C) create a competing platform. You are not compelled to post your thoughts on FB, or Hackernews, or anywhere else, nor are you entitled to a platform to do so there.
The flip-side, censorship by gov't, is platform independent. This is where the 1st amendment protections come into play, defending you (and the hosting platform) from gov't suppression regarding political or religious speech. If the USA hosted an official message board, then we could argue comparisons to China.
It wasn't always like that. We used to have rules like common carrier restricting the phone company, the fairness doctrine restricting television networks, and campaign finance rules restricting other media.
We truly live in the golden age of corporate power.
> Some companies go out of their way to try to be better than that because they recognize the influence they have on public access to platforms.
If they have that amount of public influence they should be broken up, or heavily regulated as public utilities, so that they uphold freedom of speech online in the new digital public town square.
I always wonder what makes YouTube/Twitter, in particular their so-called reviewers or fact-checkers, think that they are on the right side of history. They are not necessarily better informed, not necessarily better educated, not necessarily better at research, and not necessarily better at interpreting stats. In fact, I question whether many of them understand scientific research at all. Yet they had no problem banning a person for citing research from Stanford that questions the effectiveness of masks. Yet they had no problem declaring that Florida's governor was spreading misinformation in a round-table discussion. Yet they had no problem shutting down anyone a year ago who supported wearing masks or staying at home, and shutting down anyone now who claims the opposite.
In fact, YouTube, according to the BBC, bans any coronavirus-related content that directly contradicts World Health Organization (WHO) advice. Yet isn't it true that the WHO gave really bad advice in the early days of Covid-19? Isn't it true that Taiwan ignored the WHO's advice and implemented a strict containment policy to great success? Didn't Thailand's representative harshly criticize the WHO for its dubious collaboration with the Chinese government? What makes YouTube think that they possess moral superiority over other people?
I think the reviewers are mostly young people low on the tech totem pole, as ignorant of history or anything outside of their neighborhoods as they are certain of their virtue. You know, as young people have always been. That they get to police what everybody else can say is a serious problem.
They know they're frauds pushing a political agenda, and they know if they and their conspirators repeat the same lies over and over, the masses will believe them.
I think there is more nuance to this. The problem is that YouTube literally claims "Our mission is to give everyone a voice and show them the world," not to selectively suppress the views of certain groups. So I think it's closer to false advertising.