
Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.



I would call them "generally applicable". "Intelligence" definitely implies learning - and, to split hairs, I'm not sure RAG, fine-tuning, or six-monthly model updates count.

Where I will say we have a massive gap, which makes the average person not consider it AGI, is in context. I can give a person my very modest codebase and ask for a change, and they'll deliver - mostly coherently - matching that style, files in the right place, etc. Even today with AI, I get inconsistent design, files in random spots, etc.


> I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

I don't disagree - they are useful in many cases and exhibit human-like (or better) performance on many tasks. However, they cannot simply be a "drop-in white-collar worker" yet; they are too jagged and unreliable, don't have real memory, etc. Their economic impact is still very much limited. I think this is what many people mean when they say AGI - something with cognitive performance so good it equals or beats humans in the real world, at their jobs - not at some benchmark.

One could ask - does it matter? Why can't we say the current tools are great task solvers and call it AGI even if they are bad agents? It's a lengthy discussion to have, but I think that ultimately yes, agentic reliability really matters.


that's the thing about language. we all kinda gotta agree on the meanings


