Hacker News

My suspicion is that we're collectively becoming accustomed to ChatGPT failures. These failures cause real problems and grow more annoying over time. The same thing happened with voice assistants.

That being said, OpenAI's safety filters have definitely changed. ChatGPT is more prone to reminding me that it is an LLM, and it refuses to participate in pretend play that it perceives as violating its safety filters. As a trivial example, ChatGPT is now less willing to generate test cases for security vulnerabilities, or to engage in speculative mathematical discussions. Instead it will simply state that it is an advanced LLM, blah blah blah.



The filters really have changed.

I started using it relatively late, but as recently as May you could give it a DOI link and it would summarize the paper for you. Now it argues that it's not a database and can only summarize the paper if you provide the full text. However, if you ask using the paper's title, it will still give you a summary.

You could also ask it to search patents on some topic, and it would give you a list of links. Now it just provides instructions on how to find them yourself.


yes! I was using GPT-4 as a citation engine for a bit by pasting in text and requesting related citations. An accuracy rate of about 3/4 was good enough that it still saved me hours of reading irrelevant material, particularly since verifying that the remaining 25% of citations didn't exist was trivial.



