Hacker News

I don't know, but numerous articles point out that GPT-4 has passed the Turing test. Are you refuting their claim entirely?


GPT-4 fails the Turing test by default because it does not claim to be human, so it's trivial for a judge to identify it. It would be interesting to train GPT to pass the Turing test, but I think for any reasonable judge it would still fail.


The only thing I've seen is that people given 5 minutes with either an AI or a real person guess correctly about 60% of the time. But 5 minutes is only enough time to exchange 2 or 3 messages, hardly a thorough test.

Taking a step back, this argument seems headed towards defining what exactly the "Turing test" is. A lot of debates devolve into arguments over the definition of a single word. That's okay; maybe I do have an uncommon definition of what the "Turing test" is.

Regardless of the definition of "Turing test" though, my underlying argument remains. I haven't seen any AI pass a thorough test in which it tries to imitate a human. Tests are either too short or depend on an unsuspecting person who doesn't know they are participating in the test. Maybe I'm moving the goalposts, but my personal goal, the one I've been watching for, is an AI that is indistinguishable from a real person in a thorough test[0], and no AI has passed such a test as far as I know.

[0]: My own definition of a thorough test: 2 humans and 1 AI in a chat, at least an hour of conversation, and a reward for the humans if they guess correctly.


Where are these articles? I know that ChatGPT can't pass the Turing test, because within about 2 minutes it would tell you that, as an AI language model, it can't answer your question. Presumably that's just a function of its initial instructions, and the API version of GPT-4 behaves somewhat differently. Is the non-chat GPT-4 capable of pretending to be a human when asked direct questions?


Given how frequently I can’t tell whether I’m talking with a human or robot, it seems certain that it has passed. There are some constraints, like what GPT is willing to discuss, how long it takes to respond, and forgetfulness in long conversations. But even those would be expected in humans to some extent.



