
LLMs don't think the way humans speak. LLMs process sequences of high-dimensional vectors.
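Concretely (a toy sketch with made-up numbers, not any particular model's internals): the model never sees words directly. Each token is looked up in a learned embedding matrix, so the input becomes a sequence of high-dimensional vectors.

    import numpy as np

    # Toy sketch: a tiny vocabulary and a random "embedding matrix".
    # Real models learn this matrix and use thousands of dimensions.
    vocab = {"the": 0, "cat": 1, "sat": 2}
    d_model = 8
    embedding_matrix = np.random.randn(len(vocab), d_model)

    tokens = ["the", "cat", "sat"]
    vectors = np.stack([embedding_matrix[vocab[t]] for t in tokens])
    print(vectors.shape)  # (3, 8): a sequence of 8-dimensional vectors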


Yeah, that was very hand-wavy on my part. What I meant to say is that LLMs encode the relationships between words, the idea being that those relationships are a good enough proxy for the relationships between the things the words represent.
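As a rough illustration of "relationships between words" (the vectors below are invented for the example; real embeddings are learned from text), related words end up close together in embedding space, which you can measure with cosine similarity:

    import numpy as np

    def cosine(a, b):
        # Cosine similarity: 1.0 means the vectors point the same way.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Made-up 3-dimensional "embeddings", for illustration only.
    king  = np.array([0.90, 0.80, 0.10])
    queen = np.array([0.85, 0.75, 0.20])
    apple = np.array([0.10, 0.20, 0.90])

    print(cosine(king, queen))  # high: used in similar contexts
    print(cosine(king, apple))  # lower: less related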

I am conjecturing:

1. that solely relying on written artifacts produced by humans imposes some upper bound on the amount of knowledge that can be represented.

2. that language is an inefficient representation of human knowledge. It’s redundant and contains inaccuracies. Using written artifacts is not the shortest path to learning.

For example, take mathematics. Reading a ton of math literature is not sufficient to learn math effectively. There's a component of discovery that comes from, e.g., attempting to write a proof yourself, and that can't be replaced by reading all of the proofs that already exist.

Anyway I would take all this with a giant grain of salt.



