
> porn/violence would be pretty terrifying.

uh have you seen American/European mainstream pornography? it's already pretty violent (e.g. face slapping, choking, kicking, extreme BDSM).

I just don't see why this stuff is allowed and protected by the law (if it's not recorded and published, it's illegal) and then we are suddenly concerned about what text can do.

Just one of the many double standards I see in Western society.



> uh have you seen American/European mainstream pornography? it's already pretty violent

That's not at all what I am talking about. What I am saying is that such a model would give everyone the ability to create extremely realistic fake images of someone else within a sexual/violent context, in one click, thanks to inpainting. This can become a hate/blackmail machine very fast.

Even though DALL-E 2 is not trained on violence/porn, it still forbids inpainting user-uploaded pictures with realistic faces to prevent abuse. Now imagine the potential of a model that is trained on porn/violence.

Someone is eventually going to do it, but to come back to your initial question about why it hasn't been done yet: I believe it's because most people would rather not be that someone.


One example risk is someone using computer-generated content to extort money, demand ransom, etc. The cheaper and easier this becomes, the more likely it is to be weaponized at scale.


but wouldn't the ability to auto-generate blackmail material mean the value of blackmail would fall? Just from a supply and demand perspective, it makes sense to me that deepfaked kompromat would put a serious discount on such material, especially if everybody knows it could have been generated by an AI.

Someone like Trump would just shrug and say the pee tapes are deepfaked. I don't think it's possible for AI to bypass forensics either. So again, this narrative that "deepfake blackmail" would be dangerous makes no sense.


I think it's less about Trump-level people and more about basic scams. Imagine automating a Facebook bot to take someone's picture, edit it into a compromising scene, and message them saying you'll share it with their friends unless they send you some Bitcoin. That gives you a scalable blackmail tool.

Of course, after a while it'll probably stop working, but there will be a period of time where it can be done profitably and a longer period where it will be obnoxious.

And, of course, you could probably always use the tool to scare children, who, even in the future, might not know that everyone else would shrug off the generated pictures.


seems like cryptocurrency is the problem


You mean like gift cards are used in the same manner?


> The cheaper and easier this becomes, the more likely it is to be weaponized at scale.

...and the more people will be aware of and stop believing in the "fake reality".

Ensuring this technology is only available to a tiny subset of the population essentially gives all the power of distorting reality to that tiny group of people.

In fact, I suspect that is precisely the reason.



