
This is the most braindead take, and AI bros keep pushing it. Your 'AI' model is NOT A PERSON, and therefore it is different.


Does that include book readers for the blind? They typically have some sort of optical character recognition and benefit a user, just like an ML training dataset benefits users.

My point being: it's exceptionally hard to write laws that deny precisely what you don't want and allow precisely what you do want, without quickly getting into details that call the entire law's assumptions into question. Here that assumption is "because an ML training system is not a person, it has no right to scan the web".


The main difference here is that these AI bots are operating with an entirely different agenda. The ethics remain to be seen, and the jury is out as to whether they will benefit the user the way they promise they will.

They're also operating on a whole different scale, and instead of supplementing the web's content they're devaluing it to a degree.


The "AI bots" aren't operating with an agenda: at least as far as we can tell now, training algorithms and their scrapers do not have agency.

Basically you're assuming the agenda of the operator and saying "that's bad and shouldn't be allowed". But I see the web, except for things specifically labelled with standard copyright disclaimers, as effectively a large corpus of publicly available data, "in the market square for all to see".




