I find that 99% of ads are blocked simply by disabling Javascript. Does that suggest that disabling Javascript is unethical? Or does it suggest that those blocked advertisements were over-stepping the bounds of the implicit contract?
That's a serious accusation. Can you elaborate? What is the name of the company? Why does the Wikimedia Foundation claim ownership? And if you're referring to the Wikimedia Foundation, then what do you mean by "shareholders"?
I also can't imagine calling our current era "hyper-regulation" with respect to software. The Microsoft antitrust case came only 16 years after Windows 1.0, but this year will mark 25 years since then.
Skimming the actual text of the law[1], I don't see anything particularly objectionable. Basically it requires a toggle when creating/editing a local user account that signals "this user is/is not a child". Applications could then tailor their content for child/not child audiences.
Which isn't to suggest that it's a good law, just not really "age verification".
> good faith effort to comply with this title, taking into consideration available technology and any reasonable technical limitations or outages
could easily be read as meaning "facial recognition technology exists and is available, not using it is a business decision, failure to use it removes the good faith protection".
If the lawmakers didn't intend this, then they didn't need to add all the wiggle words that'll let the courts expand the scope of this law.
My first reaction is that this is an insanely bad law:
* The signal has to be made available to both apps and websites
* So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.
* A fair share of parents will realize this and, in order to protect their children, will willfully refuse to comply. So now we'll have a bunch of kids surfing the net with a flag saying they're adults and it's okay to show them adult content.
* Some apps/websites will end up relying on this signal instead of real age verification, which means that in places like porn sites, where there's a decent argument for blocking access by kids, that blocking will get harder. Or your kid will get random porn ads on websites or something.
So basically unless this thing is thrown out by the courts, California lawmakers have just increased the number of kids who get groomed and the number of kids who get shown porn.
I'm not sure what the solution is, but to steelman a bit: the alternative is that kids have access to all the adult spaces, where they will be groomed. A website/app serving grooming content to a kid because of this signal is just so incredibly unlikely compared to a kid being groomed as a result of having unrestricted access.
Since I do not see a solution, and you see identifying children as a risk, what do you see as a solution for kids being in the same spaces as adults? Do you see a reasonable implementation to separate them, that doesn't have the "we know which accounts are children" problem? Maybe there's something in between?
Also, I think it's important to understand the life of a modern child, who's in front of a screen 7.5 hours a day on average [1], increasingly on social media, and with half of kids having unrestricted access to the internet [2].
I hate government control/nanny state, but I think 5-year-olds watching gore websites, watching other children die for fun, is probably not OK (I saw this at the dentist). People are really stupid, and many parents are really shitty. What do you do? Maybe nothing is the answer?
So say one of the 50% of children that have unrestricted access goes somewhere they shouldn't, or interacts with people they shouldn't. How is it detected so the parents can be held liable? What does the implementation look like to you?
As the problem is adults trying to groom kids, the answer is robust detection and enforcement of the current anti-grooming laws.
It's ironic that people supposedly care about this when there's also a child rapist/murderer being kept safe as President without being held accountable for his crimes.
I suppose this law could be used as a defense against getting caught grooming minors: "I thought they were an adult, as surely a kid wouldn't be able to access that chat group."
How, exactly, does one accomplish "robust detection of a child"? I assume your answer would include complete surveillance of all internet communication? Could you expand on your idea of the implementation?
Sorry if I wasn't clear - I am proposing that the adults face the robust detection and enforcement of anti-grooming laws. One method is to set up honey-pots with law enforcement officers playing the part of an innocent child (i.e. avoiding entrapment) and then throwing the full weight of the law behind any adult showing predatory behaviour.
What I propose is rather than putting all the effort into preventing children from entering dangerous adult spaces, it's better to put the effort into ensuring that sex criminals are prosecuted and trying to make adult spaces less dangerous.
I think an obvious problem with this method is scaling, partly because grooming isn't a local phenomenon. It would require worldwide cooperation, especially from the few countries that account for a disproportionate share of offenders.
Instead, websites should voluntarily put content ratings on their own stuff; most would, either because they don't intend to harm children or because of societal pressure.
Then, software on the user's computer can filter without revealing any information about the user.
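For what it's worth, a voluntary version of this already exists: sites can self-label (e.g. with the RTA meta tag), and a purely local filter can act on that label. A minimal sketch of the idea, assuming the site exposes a rating in a <meta> tag; the function name and the heuristics are mine, not any standard API:

    // Minimal sketch: check a page for a voluntary self-rating
    // (e.g. the RTA label) so a local filter can decide to block it
    // without telling the site anything about the user.
    async function isSelfRatedAdult(url: string): Promise<boolean> {
      const html = await (await fetch(url)).text();
      // e.g. <meta name="rating" content="RTA-5042-1996-1400-1577-RTA">
      const match = html.match(
        /<meta[^>]+name=["']rating["'][^>]+content=["']([^"']+)["']/i
      );
      if (!match) return false; // unrated: leave it to other heuristics
      const rating = match[1].toLowerCase();
      return rating.startsWith("rta-") || rating === "adult" || rating === "mature";
    }

A parental-control app or browser extension could run something like this before rendering a page; the decision stays entirely on the user's machine.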
> So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.
I'm not going to say that's impossible, but the sites that do the right thing and reduce risk are going to vastly outnumber the ones that abuse the signal. And 90% of those kids already have targets on their backs by virtue of the sites they visit.
> What risk exists from sites that are going to do the right thing?
To be clear, I'm talking about sites for adults that are doing their best right now, but have no idea who is 18 and who is 8. If they have communication between users, it's not set up to be filtered and moderated in a way that protects an 8 year old. If they could cut out a big majority of 8 year olds with the flip of a switch, that would be a good thing.
That's a lot of risk that exists right now and could be reduced.
> This smells strongly of "I just made it harder for those that do the right thing and did nothing to solve any problem."
There is no meaningful difficulty in storing two bytes of extra data on the OS account and turning it into a two bit flag that programs can access and pass on to websites. And for most websites that let users communicate it makes their job a lot easier, even if the flag isn't always right.
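To make that concrete, here's a rough sketch of what the OS-side data and the signal handed to a site could look like. The bracket names, cutoffs, and function names below are hypothetical illustrations, not what the law actually specifies:

    // Hypothetical sketch of an OS account carrying a coarse age bracket.
    // Two bits are enough for four brackets; the cutoffs are my guesses,
    // not statutory ones.
    enum AgeBracket {
      Child = 0,      // e.g. under 13
      YoungTeen = 1,  // e.g. 13-15
      OlderTeen = 2,  // e.g. 16-17
      Adult = 3,      // 18+
    }

    interface OsAccount {
      username: string;
      birthYear: number;      // the "two bytes of extra data"
      ageBracket: AgeBracket; // the derived two-bit flag
    }

    // The only thing an app or site ever sees is the bracket.
    function signalForSite(account: OsAccount): { ageBracket: AgeBracket } {
      return { ageBracket: account.ageBracket };
    }

A site that lets users talk to each other could route anything below Adult into more heavily moderated channels, even knowing the flag will sometimes be wrong.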
I'd argue it's not meaningless, because the point wasn't to show inclusion but power. Nobody went after master's degrees, "master" as a rank in video games, or anything else.
Reminds me of twitch.tv trying to remove "blind playthrough" as a tag to encourage inclusive language [1].
So what? Your proposal is to change nothing, continue as is, and subtly continue using terms like "blacklist" as something bad and "whitelist" as something good... I don't think I understand your point. I don't see any real sense in it.
Unfortunately, MANY people still think this is nonsense and shouldn't be given attention. What you don't understand is that you subtly say that things from Black people are bad and things from white people are good. Do you know what that causes in the end?
A company "of Black people" applies for YC and has a higher chance of being rejected than a company of/for white people, even if it's a necessary solution. You doubt it? Try it!
No, I'm not proposing to change nothing and continue as is, nor am I using coded language to express my secret inner racism.
I'm saying that changing words like "blacklist" or "master" is purely performative and actually quite selfish. People do it to feel good about themselves for "helping" without actually having to do anything helpful. It's the moral equivalent of sending "thoughts and prayers".
I'm not saying that those who use these terms are racist. I'm saying that language evolves. If there are equivalent technical alternatives that don't carry a history of oppression, why not use them? It costs nothing and can make the environment more inclusive. This doesn't replace concrete actions, but it also doesn't prevent them from happening.
If changing a word is "purely performative," then keeping it is also purely performative. The difference is that one choice preserves a metaphor of domination and the other does not. Technology is made of choices. This is one too.
That means I won't bother fighting changes that became established before I was born. It most definitely doesn't mean I have to go along with every change I see proposed now.
> If there are equivalent technical alternatives that don't carry a history of oppression
No one chose to be born in a certain context.
But everyone participates in the context that they continue to feed or transform.
Do you recognize that you live in a system that produces racial inequality today?
If the answer is yes, then there is some level of participation, albeit minimal.
Because living within a structure already means being part of it.
If you are white, your ancestors did this. They created separation and made simple words dehumanize people. So yes, you and everyone else has a chance to make amends. The choice is yours.
Notably, the SFTP specification was never completed. We're working off of draft specs, and presumably these issues wouldn't have made it into a final version.