If everything in life, whether physics, computer science, economics, or mathematics, can be reasoned about from first principles, would an AI able to reason with 100% accuracy be capable of understanding our world in all its detail and deriving ideas and outcomes from it, thus leading to AGI?
Reasoning is heuristic by nature: unless an AI has complete knowledge of every possible situation, it can't be relied on to reason accurately. The world is too complex to admit perfectly reliable reasoning, and you can't Monte Carlo simulate your way to a single authoritative truth.
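To illustrate that last point with a minimal sketch (the pi example and the `estimate_pi` helper are mine, purely for illustration): a Monte Carlo estimate's error shrinks only on the order of 1/sqrt(N), so each extra digit of accuracy costs roughly 100x more samples, and no finite run ever lands on the exact answer.

```python
import math
import random

def estimate_pi(n_samples: int) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that fall inside the quarter circle, times 4."""
    inside = sum(
        1
        for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

# Error shrinks ~ 1/sqrt(N): 100x more samples buys about one more digit.
for n in (10**2, 10**4, 10**6):
    est = estimate_pi(n)
    print(f"N={n:>9,}  estimate={est:.6f}  error={abs(est - math.pi):.6f}")
```

The same scaling holds regardless of what's being simulated; sampling buys you tighter confidence intervals, never certainty.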
Plus, LLMs are so inherently stupid that I don't think we have to worry about "AGI" for another 10-20 years. All anyone wants is their glorified Markov chain anyway.
Reaction: How many quetta-ronna-yotta-zetta-exa-watts* of power were you figuring this AGI might draw to understand the world, at scale, on a "just solve the quantum equations" basis?