Hacker News

As always, understanding of nuanced issues will pool at the extremes.

It is both true that:

- Adoption of DL lets corporations launder, or inadvertently adopt, unethical practices that exacerbate inequalities.

- We have not created AGI: LLMs are not AGI, Stable Diffusion is not AGI, and none of these fields of research is an indication that we are closer to AGI. This is like thinking Clever Hans is a portent of our future equine overlords.

A great number of people who claim that we are one minute from AGI-overlord midnight are doing the old-school carnival barker trick: "This snake oil is so potent it can kill you if you use too much, so be sure to listen carefully to my instructions!"



I’m curious why you are so confident in your assertion. It seems to me that an advanced statistical model of the world is an essential component of AGI. How do you know that we aren’t a few breakthroughs away from AGI?

Some recent papers have shown significant performance improvements when these models are allowed to respond to their own outputs.

How do you know that putting an LLM in a fancy loop with access to external memory and tools isn’t AGI?
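The "fancy loop" being described can be made concrete with a toy sketch. Everything here is illustrative: the model is a hard-coded stub standing in for a real LLM, and the `TOOL:`/`FINAL:` protocol and function names are invented for this example. The point is only the control flow the comment alludes to: generate, optionally call a tool, append the result to external memory, and feed the memory back into the next prompt so the model responds to its own outputs.

```python
from typing import Callable

def stub_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM: first requests a tool, then answers."""
    if "result:" not in prompt:
        return "TOOL:add 2 3"          # pretend the model decides to use a tool
    return "FINAL: the sum is 5"

def agent_loop(model: Callable[[str], str], tools: dict, task: str,
               max_steps: int = 5) -> str:
    memory: list[str] = [task]         # external memory: transcript of all steps
    for _ in range(max_steps):
        out = model("\n".join(memory)) # the model sees its own prior outputs
        memory.append(out)
        if out.startswith("TOOL:"):
            name, *args = out[len("TOOL:"):].split()
            memory.append(f"result: {tools[name](*args)}")
        elif out.startswith("FINAL:"):
            return out[len("FINAL:"):].strip()
    return "no answer within step budget"

tools = {"add": lambda a, b: int(a) + int(b)}
print(agent_loop(stub_model, tools, "What is 2 + 3?"))  # -> the sum is 5
```

Whether wrapping a real LLM in such a loop amounts to AGI is exactly the open question in this thread; the sketch only shows that the architecture itself is a few lines of plumbing.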



