Hacker News | charonn0's comments

I find that 99% of ads are blocked simply by disabling Javascript. Does that suggest that disabling Javascript is unethical? Or does it suggest that those blocked advertisements were over-stepping the bounds of the implicit contract?


Thanks! I was wondering if there was an actual site behind it or if it was just a joke.

Honestly, this should be updated to the main link; Anubis at difficulty 8 is astonishingly hostile.

Added above. Thanks!

> we tend to take anti-bot measures very seriously

Should have maybe prioritized differently...


That's a serious accusation. Can you elaborate? What is the name of the company? Why does the Wikimedia Foundation claim ownership? And if you're referring to the Wikimedia Foundation, then what do you mean by "shareholders"?


I don't think regulations are the legal restrictions people are referring to, but rather private lawsuits.


I also can't imagine calling our current era "hyper-regulation" with respect to software. The Microsoft antitrust case was only 16 years after Windows 1.0, but this year marks 25 years since then.


Maybe the OS could ship with preconfigured age-range based usergroups. When you add a new user you could simply add them to the appropriate usergroup.

>> useradd -G under13usergroup username


Skimming the actual text of the law[1], I don't see anything particularly objectionable. Basically it requires a toggle when creating/editing a local user account that signals "this user is/is not a child". Applications could then tailor their content for child/not child audiences.

Which isn't to suggest that it's a good law, just not really "age verification".

[1]: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...


The liability exemption is a moving target

> good faith effort to comply with this title, taking into consideration available technology and any reasonable technical limitations or outages

could easily be read as meaning "facial recognition technology exists and is available, not using it is a business decision, failure to use it removes the good faith protection".

If the lawmakers didn't intend this, then they didn't need to add all the wiggle words that'll let the courts expand the scope of this law.


My first reaction is that this is an insanely bad law:

* The signal has to be made available to both apps and websites

* So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.

* A fair share of parents will realize this and, in order to protect their children, will willfully refuse to comply. So now we'll have a bunch of kids surfing the net with a flag saying they're an adult and it's okay to show them adult content.

* Some apps/websites will end up relying on this signal instead of some real age verification, which means that in places like porn sites where there's a decent argument for blocking access from kids, it'll get harder. Or your kid will get random porn ads on websites or something.

So basically unless this thing is thrown out by the courts, California lawmakers have just increased the number of kids who get groomed and the number of kids who get shown porn.

Mind boggling that something this bad passed.


I'm not sure what the solution is, but to steelman a bit, the alternative is kids having access to all the adult spaces, where they will be groomed. A website/app using this signal to target a kid is just so incredibly unlikely compared to a kid being groomed as the result of having unrestricted access.

Since I do not see a solution, and you see identifying children as a risk, what do you see as a solution for kids being in the same spaces as adults? Do you see a reasonable implementation to separate them, that doesn't have the "we know which accounts are children" problem? Maybe there's something in between?

Also, I think it's important to understand the life of a modern child, who's in front of a screen 7.5 hours a day on average [1], with that increasingly being social media, half having unrestricted access to the internet [2].

I hate government control/nanny state, but I think 5 year olds watching gore websites, watching other children die for fun, is probably not ok (I saw this at the dentist). People are really stupid, and many parents are really shitty. What do you do? Maybe nothing is the answer?

[1] https://www.aacap.org/AACAP/Families_and_Youth/Facts_for_Fam...

[2] https://fosi.org/parental-controls-for-online-safety-are-und...


The solution is parental liability.


So say one of the 50% of children that have unrestricted access goes somewhere they shouldn't, or interacts with people they shouldn't. How is it detected so the parents can be held liable? What does the implementation look like to you?


The same way anything illegal is detected: a police report.


You misread my comment.

How is it detected? A police report is for after it's detected.


At the very least, the affected parties would know if a crime has been committed.

Preventing the crime from happening is out of scope of the government, as it should be.


> At the very least, the affected parties would know if a crime has been committed.

The affected parties are unsupervised children, who are accessing adult spaces and content. Are you saying the children will tell on themselves?

Maybe take a moment to re-read this comment chain.


So never.


As the problem is adults trying to groom kids, the answer is robust detection and enforcement of the current anti-grooming laws.

It's ironic that people supposedly care about this when there's also a child rapist/murderer being kept safe as President without being held accountable for his crimes.

I suppose this law could be used as a defense against getting caught grooming minors - "I thought they were adult as surely a kid wouldn't be able to access that chat group"


> robust detection and enforcement

How, exactly, does one accomplish "robust detection of a child"? I assume your answer would include complete surveillance of all internet communication? Could you expand on your idea of the implementation?


Sorry if I wasn't clear - I am proposing that the adults face the robust detection and enforcement of anti-grooming laws. One method is to set up honey-pots with law enforcement officers playing the part of an innocent child (i.e. avoiding entrapment) and then throwing the full weight of the law behind any adult showing predatory behaviour.

What I propose is rather than putting all the effort into preventing children from entering dangerous adult spaces, it's better to put the effort into ensuring that sex criminals are prosecuted and trying to make adult spaces less dangerous.


I think an obvious problem for this method is scaling, partly from grooming not being a local phenomenon. It would require worldwide cooperation, especially in a few countries that are statistical offenders.


Instead, websites should voluntarily put content ratings on their own stuff--most would because either they don't intend to harm children, or from societal pressure.

Then, software on the user's computer can filter without revealing any information about the user.
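One existing convention along these lines is the voluntary RTA ("Restricted To Adults") meta label that adult sites already embed for exactly this kind of client-side filtering. A minimal sketch (Python, stdlib only; the class and function names are mine) of how filtering software on the user's machine could check a page for such a label without revealing anything about the user:

```python
from html.parser import HTMLParser

# The RTA label string is a real, widely used convention; "adult" is a
# common value for the generic <meta name="rating"> tag.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"


class RatingScanner(HTMLParser):
    """Collects the content of every <meta name="rating"> tag on a page."""

    def __init__(self):
        super().__init__()
        self.ratings = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if (d.get("name") or "").lower() == "rating":
                self.ratings.append(d.get("content") or "")


def is_adult_labeled(html: str) -> bool:
    """True if the page voluntarily labels itself as adult content."""
    scanner = RatingScanner()
    scanner.feed(html)
    return any(r in (RTA_LABEL, "adult") for r in scanner.ratings)
```

The filtering decision then happens entirely on the user's computer: the site learns nothing about who is browsing.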


Finding out who is a child online doesn't seem difficult at all to me, and I also doubt that finding children is groomers' big issue online.


> So if you dutifully input valid ages for your computer users, now any groomer with a website or an app can find out who's a kid and who isn't. You just put a target on your kid's back.

I'm not going to say that's impossible but the number of sites that do the right thing and reduce risk are going to vastly outnumber that. And 90% of those kids already have targets on their backs by virtue of the sites they visit.


What risk exists from sites that are trying to do the right thing?

This smells strongly of I just made it harder for those that do the right thing and did nothing to solve any problem.


> What risk exists from sites that are trying to do the right thing?

To be clear, I'm talking about sites for adults that are doing their best right now, but have no idea who is 18 and who is 8. If they have communication between users, it's not set up to be filtered and moderated in a way that protects an 8 year old. If they could cut out a big majority of 8 year olds with the flip of a switch, that would be a good thing.

That's a lot of risk that exists right now and could be reduced.

> This smells strongly of I just made it harder for those that do the right thing and did nothing to solve any problem.

There is no meaningful difficulty in storing two bytes of extra data on the OS account and turning it into a two-bit flag that programs can access and pass on to websites. And for most websites that let users communicate it makes their job a lot easier, even if the flag isn't always right.
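As a sketch of how cheap the receiving side could be, here is a hypothetical mapping from a two-bit age flag to feature gates a site might apply. The bracket names, values, and gate names are all illustrative assumptions of mine, not from the actual California statute or any real API:

```python
from enum import IntEnum


class AgeBracket(IntEnum):
    """Hypothetical two-bit OS-provided age signal."""
    UNKNOWN = 0        # no flag set, or the OS doesn't support it
    UNDER_13 = 1
    TEEN_13_17 = 2
    ADULT_18_PLUS = 3


def moderation_policy(bracket: AgeBracket) -> dict:
    """Return the feature gates a site could apply based on the flag.

    Unknown is treated like a minor, so an unset flag fails safe.
    """
    minor_or_unknown = bracket != AgeBracket.ADULT_18_PLUS
    return {
        "adult_content": bracket == AgeBracket.ADULT_18_PLUS,
        "dms_from_strangers": not minor_or_unknown,
        "filtered_chat": minor_or_unknown,
    }
```

The fail-safe default for `UNKNOWN` is one possible design choice; a site could equally treat an unset flag as "no information" and fall back to its existing age checks.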


OSes which don't have user accounts (e.g. always root), like Haiku and Amiga, are going to thrive soon …


> I can't think of any other differentiating features in Gmail. Ads in my mail? Nostalgia?

Originally, the differentiating features were multi-gigabyte storage limits and the public's goodwill towards Google, Inc.

Gigabyte storage is now the norm, public goodwill for Alphabet, Inc. is minimal, and so there's nothing that really sets Gmail apart anymore.


Because changing blacklist to blocklist, master to main, etc. is a meaningless act of virtue signalling.


I'd argue it's not meaningless because the point wasn't to show inclusion but power. Nobody went for master's degrees, "master" as a rank in video games, or anything else.

Reminds me of [1]twitch.tv trying to remove "blind playthrough" as a tag to encourage inclusive language.

1. https://www.reddit.com/r/Twitch/comments/k7dvgw/twitch_remov...


GitHub changed the default branch name from master to main.


So what? Your proposal is to change nothing, continue as is, and subtly continue using terms like "blacklist" as something bad and "whitelist" as something good... I don't think I understand your point. I don't see any real sense in it.

Unfortunately, MANY people still think this is nonsense and shouldn't be given attention. What you don't understand is that you subtly say that things from Black people are bad and things from white people are good. Do you know what that causes in the end?

A company "of Black people" applies for YC and has a higher chance of being rejected than a company of/for white people, even if it's a necessary solution. You doubt it? Try it!


No, I'm not proposing to change nothing, continue as is, nor do I use coded language to express my secret inner racism.

I'm saying that changing words like "blacklist" or "master" is purely performative and actually quite selfish. People do it to feel good about themselves for "helping" without actually having to do anything helpful. It's the moral equivalent of sending "thoughts and prayers".


I'm not saying that those who use these terms are racist. I'm saying that language evolves. If there are equivalent technical alternatives that don't carry a history of oppression, why not use them? It costs nothing and can make the environment more inclusive. This doesn't replace concrete actions, but it also doesn't prevent them from happening.

If changing a word is "purely performative," then keeping it is also purely performative. The difference is that one choice preserves a metaphor of domination and the other does not. Technology is made of choices. This is one too.


> I'm saying that language evolves.

That means I won't bother fighting changes that became established before I was born. It most definitely doesn't mean I have to go along with every change I see proposed now.

> If there are equivalent technical alternatives that don't carry a history of oppression

Words are not oppression.


No one chose to be born in a certain context. But everyone participates in the context that they continue to feed or transform.

Do you recognize that you live in a system that produces racial inequality today?

If the answer is yes, then there is some level of participation, albeit minimal. Because living in a structure is already being inside it.

If you are white, your ancestors did this. They created separation and made simple words dehumanize people. So yes, you and everyone else has a chance to make amends. The choice is yours.


> It costs nothing

It cost time and coding work to make the change.


Whatever the cost! You spend money on trivial things, why not on important things?


Pretty sure you can do it at the individual device level, in the OS's network settings.


My Xfinity gateway blocks DNS unless it's to Comcast's name servers. DNS over HTTPS does work in Firefox.

I could work around that by configuring the gateway as just a modem and provide my own router and wifi, but then there are data caps.
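DNS over HTTPS gets through because it rides over ordinary TLS on port 443, so the gateway can't tell it apart from regular web traffic. Public resolvers such as Cloudflare's `https://cloudflare-dns.com/dns-query` also serve an `application/dns-json` response format; a minimal sketch of parsing such a response (the function name is mine, and the network fetch itself is omitted):

```python
import json


def a_records(dns_json: str) -> list[str]:
    """Extract IPv4 answers from a DNS-over-HTTPS JSON response.

    Expects the application/dns-json format served by resolvers like
    Cloudflare's dns-query endpoint, e.g.
    {"Status": 0, "Answer": [{"name": "...", "type": 1, "data": "..."}]}
    """
    reply = json.loads(dns_json)
    # Type 1 is an A record in the DNS wire format; skip AAAA, CNAME, etc.
    return [ans["data"] for ans in reply.get("Answer", []) if ans.get("type") == 1]
```

Pointing Firefox's `network.trr` settings (or any DoH-capable stub resolver) at such an endpoint is what lets name resolution bypass the gateway's port-53 filtering.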


Notably, the SFTP specification was never completed. We're working off of draft specs, and presumably these issues wouldn't have made it into a final version.

