
For better or for worse, it is probably time to get accustomed to this. Early AI operated with symbols, and it has been clear for decades that that approach was a dead end. Contemporary AI is stochastic, and for many applications it works far better than previous attempts, even allowing for the inherent uncertainty and errors.

Note that at the level of quantum physics one might not be able to trust CPU instructions to be faultless. It is all about getting the right error margins.



Accustomed to tech demos that never lead to real-world capabilities? That's what I'm pointing out in my comment.

Btw, computers are symbol manipulation machines, and in general that's how we understand computation: manipulating symbols according to a set of rules, as in Turing machines. Stochastic algorithms also ultimately work that way, and they will continue to until we can run all the modern AI stuff on, say, analog computers.
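To make that concrete, here's a toy sketch (in Python, which is my choice here, not anything from the thread): a Monte Carlo estimate of pi, the textbook example of a "stochastic" algorithm. Its randomness comes from a seeded PRNG, which is itself a deterministic rule-following procedure, so the whole thing is still symbol manipulation end to end.

```python
import random

def estimate_pi(samples: int, seed: int) -> float:
    """Monte Carlo pi: fraction of uniform points landing in the unit quarter-circle."""
    rng = random.Random(seed)  # the PRNG is a deterministic symbol-rewriting rule
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

# Same seed in, same symbols out: the "randomness" is computed by rule,
# so on a digital machine the stochastic algorithm is fully symbolic.
print(estimate_pi(100_000, seed=42) == estimate_pi(100_000, seed=42))
```

Run it twice with the same seed and you get bit-identical output; the stochasticity only enters as a modeling assumption, not as anything physically random in the machine.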



