
Then your definition of understanding is meaningless. If a physical system is able to accurately simulate understanding, it understands.


A human who mimics the speech of someone who does understand usually doesn't understand it themselves. We see this happen all the time with real humans; you've probably seen it as well.

To see whether a human understands, we ask them edge-case questions about things they probably haven't seen before. If they fail there but manage fine on common things, we know the human was just faking understanding. Every LLM today fails this test, so they don't understand, just as we'd say a human who produces the same output doesn't understand. These LLMs have superhuman memory, so their ability to mimic smart humans is far greater than a human faker's, but other than that they are just like your typical human faker.


> A human who mimics the speech of someone who does understand usually doesn't understand it themselves.

That's not what LLMs do. They provide novel answers to questions they've never seen before, even on topics they've never heard of because the user just made them up.

> To see whether a human understands, we ask them edge-case questions

This tests whether there are flaws in their understanding. My dog understands a lot of things about the world, but he sometimes shows that he doesn't understand basic things, in ways that are completely baffling to me. Should I just throw my hands in the air and declare that dogs are incapable of understanding anything?


My definition of understanding is not meaningless, but it appears you do not understand it.



