
AlphaEvolve demonstrates that Google can build a system that can be trained to do very challenging intellectual tasks (e.g. research-level math).

Isn't it just an optimization problem from this point? E.g. right now training takes a lot of hardware and time. If they make it so efficient that training can happen in a matter of minutes and costs only a few dollars, won't that satisfy your criterion?

I'm not saying AlphaEvolve is "AGI", but it looks odd to deny it's a step towards AGI.



I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards a goal stated both abstractly and in terms of concrete metrics.

I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is the ability to work with the context of its goal in an abstract manner, instead of just the derivatives of the cost function with respect to the inputs, and thus to tackle problems with mind-boggling dimensionality.
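
To make that concrete, here's a rough sketch of what such a loop could look like, in Python. All the names (evolve, score, llm_propose) are made up for illustration and say nothing about AlphaEvolve's actual interface; the point is only that the mutation step is an LLM conditioned on the abstract goal rather than a gradient:

  # Rough sketch of an LLM-guided evolutionary loop (all names illustrative,
  # not AlphaEvolve's real interface). The gradient is replaced by an LLM
  # that reads the goal and a parent candidate as text and proposes a new one.
  import random

  def evolve(goal_description, score, llm_propose, generations=100, pool_size=20):
      # goal_description: the abstract statement of the task (plain text)
      # score:            the concrete metric, candidate -> float
      # llm_propose:      (goal, parent, parent_score) -> new candidate
      population = [("pass", score("pass"))]  # start from a trivial candidate
      for _ in range(generations):
          # pick a decent parent via a small tournament selection
          parent, parent_score = max(
              random.sample(population, min(3, len(population))),
              key=lambda p: p[1])
          # the LLM sees the goal in natural language, so it can exploit
          # structure that derivative-based search has no access to
          child = llm_propose(goal_description, parent, parent_score)
          population.append((child, score(child)))
          # keep only the best pool_size candidates
          population = sorted(population, key=lambda p: p[1])[-pool_size:]
      return max(population, key=lambda p: p[1])

The selection/mutation skeleton is plain evolutionary search; the only unusual ingredient is that mutation reads the goal as text.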


The "context window" seems to be a fundamental blocker preventing LLMs from replacing a white collar worker without some fundamental break through to solve it.


It's too early to declare something a "fundamental blocker" while there's so much ongoing research.


There's been 'ongoing research' since the 60s



