First, children also have a right to free speech. It is perhaps even more important than for adults, as children are not empowered to do anything but speak.
Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".
Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.
> First, children also have a right to free speech.
Maybe I'm wrong (I haven't read all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That, to me, looks like a significant difference.
As for your other two points, I can't really argue against them: they are obviously valid, but also very hypothetical, and in that context, sure, anything is possible I suppose.
That said, something has to be done at some point, because it's obvious that these platforms are having a profound impact on society as a whole. And I don't even care specifically about the kids; I'm talking in general.
Under most of these laws, most websites with user-generated content qualify.
I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.
> but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.
I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media", but missing from these short summaries is how "social media" is defined, which is obviously an important detail.
Seems to me that an "easy" solution would be to implement some sort of size cap; that way you could easily leave old-school forums out.
It would not be a perfect solution, but it's probably better than including every site with user-generated content.
> I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky)
An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):
1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.
2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.
3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.
These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.
[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.
[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.
The chilling effect of tying identity to speech means it directly affects free speech. The Founding Fathers of the US wrote under many pseudonyms. If you think you may be punished for your words, you might not speak out.
We know we cannot trust service providers on the internet to take care of our identifying data. We cannot ensure they won't turn that data over to a corrupt government entity.
Therefore, we cannot guarantee free speech on these platforms if there is a looming threat of being punished for that speech. Yes, these are private entities, but they have also taken advantage of the boom in tech to effectively replace certain infrastructure. If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.
> If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.
Are private social media platforms "public services"? And you mentioned constitutional rights: which constitution are we talking about here? These are global-scale issues; I don't think we should default to the US constitution.
> We know we cannot trust service providers on the internet to take care of our identifying data.
Nobody needs to trust those. I can, right now, use my government-issued ID to identify myself online through a platform that's run by the government itself. And if your rebuttal is that we can't trust the government either, then yeah, I don't know what to say.
Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.
> Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.
This is somewhat central to being able to remain anonymous.
Protesters and observers are having their passports cancelled or their TSA precheck revoked due to speech. You cannot trust the government to abide by the first amendment.
Private services sell your data to build a panopticon, then sell that data indirectly to the government.
Therefore, tying your anonymous speech to a legal identity puts one at risk of being punished by the government for protected speech.
> You cannot trust the government to abide by the first amendment.
Again, this is a global issue. There is no First Amendment where I live. But the power these platforms hold at a global level is a real issue, and something has to be done to deal with it. The problem is what we should do.
It's weird how radicalized people get about banning books compared to banning the internet.