
As with many philosophical discussions, there is no point in claiming LLMs can "reason" because "reason" is not a well-defined term and you will not get everyone to agree on a singular definition.

Ask a computer scientist, continental philosopher, and anthropologist what "reason" is and they will give you extremely different answers.

If by reason we mean deductive reasoning as practiced in mathematics and inductive reasoning as practiced in the sciences, there is no evidence that LLMs do anything of the sort. There is no reason (ha) to believe that linguistic pattern matching is enough to emulate all that we call thinking in man. To claim so is to adopt a drastically narrow definition of "thinking" and to ignore the fact that we are embodied intellects, capable of knowing ourselves in a transparent, possibly prelinguistic way. Unless an AI becomes embodied and can do the same, I have no faith that it will ever "think" or "reason" as humans do. It remains a really good statistical parlor trick.



https://transformer-circuits.pub/2022/in-context-learning-an...

There is a lot of evidence to suggest that they are performing induction.
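
A rough way to picture the "induction" behaviour that paper describes (this is a sketch in plain Python, not the paper's own code or a claim about how attention is actually implemented): if the pattern [A][B] has already appeared in the context and the model now sees [A] again, an induction head attends back to the earlier occurrence and predicts the [B] that followed it.

    # Hypothetical sketch of the induction-head pattern: copy whatever
    # followed the most recent earlier occurrence of the current token.
    def induction_predict(tokens):
        current = tokens[-1]
        # Scan backwards through the prior context for the same token.
        for i in range(len(tokens) - 2, -1, -1):
            if tokens[i] == current:
                return tokens[i + 1]   # copy the token that followed it
        return None                    # no earlier occurrence: no guess

    # "The cat sat. The cat ..." -> predicts "sat."
    print(induction_predict(["The", "cat", "sat.", "The", "cat"]))

Real heads do this softly over learned representations rather than exact string matches, but the in-context copying behaviour is the same shape.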


> Unless an AI becomes embodied and can do the same, I have no faith that it will ever "think" or "reason" as humans do. It remains a really good statistical parlor trick.

This may be true, but if it's "good enough", why does that matter? If I can't tell whether a user on Slack/Teams is an LLM, because they close their tickets on time with decent code quality, then I really don't care whether they know themselves in a transparent, prelinguistic fashion.



