
>Also libel requires "actual malice" which means the libeler needs to know that their statements are false. Good luck showing that with an LLM.

Isn't that easily proven by the text at the bottom of ChatGPT: "ChatGPT can make mistakes. Consider checking important information."?



Disclosing that the information is prone to error is the opposite of a malicious lie.


ChatGPT contains the disclaimer, proving that OpenAI is well aware of the model's tendency to confidently misstate facts. Bing Chat contains no such disclaimer, nor does the OpenAI Playground, which many use as a product in itself.


I’m not really interested in following every service but… idk. Is that actually true? I doubt it.


I just read the complaint, and the question we are trying to answer here is moot. This part of the complaint hinges on verbatim copies of their articles being intermingled with fake excerpts from fake articles, making it impossible for a reader to differentiate the two.

If I wrote 10 fake headlines, I might be protected. If I copied 9 from NYT and then made one up, it's a lot harder for me to argue I didn't intend to deceive.



