Hacker News

While I disagree with you that it's "a huge mistake" (I think it works fine in 95% of cases), it strikes me that this sort of semantic textual substitution is a perfect task for an LLM. Why not just ask a cheap LLM to de-sensationalize any post which hits more than 50 points or so?
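A minimal sketch of what that could look like, assuming an OpenAI-style "prompt in, completion out" callable; the model wrapper, prompt wording, and the 50-point cutoff are all illustrative, not a real HN feature:

```python
# Sketch: de-sensationalize a title with a cheap LLM once a post
# crosses a point threshold. The llm_call parameter stands in for
# any hosted-model wrapper; the threshold and prompt are assumptions.

POINT_THRESHOLD = 50

PROMPT_TEMPLATE = (
    "Rewrite this headline to be neutral and factual, removing "
    "sensational or clickbait wording. Keep it under 80 characters.\n"
    "Headline: {title}"
)

def should_rewrite(points: int) -> bool:
    """Only bother rewriting titles on posts that gain traction."""
    return points > POINT_THRESHOLD

def build_prompt(title: str) -> str:
    return PROMPT_TEMPLATE.format(title=title)

def desensationalize(title: str, points: int, llm_call) -> str:
    """llm_call: any callable mapping a prompt string to a completion,
    e.g. a thin wrapper around a cheap hosted model."""
    if not should_rewrite(points):
        return title
    return llm_call(build_prompt(title))
```

Since the model is behind a plain callable, the same logic works with whatever cheap endpoint you prefer, and it is trivial to test with a stub.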


Someone did exactly that; we saw it here a few days ago.



