Hacker News

I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system is plausibly exponential in how complex or chaotic the system is, which would mean the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's telling that the most effective way to design robust technology is to eliminate as many sources of variation as possible. That might be the only regime where intelligence actually works well, super or not.
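A minimal sketch of the intuition behind this (my illustration, not the commenter's): in a chaotic system, small errors grow roughly exponentially per step, so each extra step of reliable prediction demands multiplicatively more precision — i.e., the cost of modeling grows exponentially in the prediction horizon. The logistic map at r = 4 is a standard fully chaotic example; the threshold, starting point, and perturbation sizes below are arbitrary choices for the demo.

```python
def steps_until_divergence(eps, threshold=0.1, r=4.0, x0=0.3):
    """Iterate two logistic-map trajectories started eps apart;
    return the first step at which they differ by more than threshold."""
    a, b = x0, x0 + eps
    for step in range(10_000):
        if abs(a - b) > threshold:
            return step
        a = r * a * (1 - a)  # x -> r * x * (1 - x), chaotic at r = 4
        b = r * b * (1 - b)
    return None  # trajectories never diverged within the iteration budget

# Improving initial precision by a factor of a million (1e-6 -> 1e-12)
# buys only a modest number of additional usable steps, because the gap
# roughly doubles each step (Lyapunov exponent ~ ln 2 at r = 4).
for eps in (1e-6, 1e-12):
    print(f"eps={eps:g}: diverged after {steps_until_divergence(eps)} steps")
```

The point of the demo is the asymmetry: precision (and hence compute) must grow exponentially to extend the horizon linearly, which is one concrete way "intelligence doesn't scale" for chaotic systems.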


What does "scale well" mean here? LLMs right now aren't intelligent, so that isn't the point we'd be scaling from.

If we had a very inefficient, power-hungry machine that was 1:1 as intelligent as a human being, but we could scale it, however inefficiently, to 100:1, it might still be worth it.



