
Because for things like the Putnam questions, we are trying to match the performance of a smart human. Are LLMs just stochastic parrots, or are they capable of drawing new, meaningful inferences? We keep getting more and more evidence of the latter, but things like this throw that into question.


Okay, but you just invented "smart human" as your own universal bar (I don't share that opinion).

Also, lots of smart humans can't do the freaking Putnam; that doesn't make them stupid. It makes them non-experts.


It is perfectly possible for the first AGI to be stupid. A moron. In fact, I'd bet that's fairly likely.


I would agree if we weren't starting with LLMs as a baseline. The first AGI will know at least as much as LLMs, IMO, and that's already not-stupid. Especially once they can separate out the truth in their training data.



