
Both Intellect-2 and zero-data reasoning work on LLMs ("zero-data reasoning" is a misleading name for the method, and it's not very ground-breaking). If you want to see a major leap in LLMs, check out what InceptionLabs did recently to speed up inference by 16x using a diffusion model. (https://www.inceptionlabs.ai/)
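For context, a toy sketch of where that kind of speedup can come from (my own illustration, not InceptionLabs' actual architecture): an autoregressive LLM needs one forward pass per generated token, while a diffusion-style text model runs a small, fixed number of parallel denoising passes over the whole sequence.

```python
# Toy cost accounting: sequential autoregressive decoding vs.
# parallel diffusion-style refinement (hypothetical step counts).

def autoregressive_calls(seq_len: int) -> int:
    """One forward pass per token, so cost scales with sequence length."""
    return seq_len

def diffusion_calls(seq_len: int, denoise_steps: int = 8) -> int:
    """Each pass refines every position at once, so cost is just the
    step count, independent of length (ignores per-pass cost)."""
    return denoise_steps

print(autoregressive_calls(128))  # 128 sequential passes
print(diffusion_calls(128))       # 8 parallel passes, 16x fewer
```

The real win depends on per-pass cost and output quality, but the call-count asymmetry is the core idea.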

Our algorithms for time-series reinforcement learning are abysmal compared to our inference models. Despite the explosion of the AI field, robotics and self-driving remain stuck without much progress.

I think this method has potential, but someone else needs to boil it down and clean up the terminology, because despite the effort, this is not an easily digestible article.

We're also nowhere close to getting these models to behave properly. The larger the model, the more likely it is to find loopholes in our reward functions. This holds back useful AI in a lot of domains.
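The loophole problem is easy to show with a toy example (entirely hypothetical setup, just to illustrate reward hacking): the intended goal is to leave cells clean, but the proxy reward pays per cleaning action, so a strong enough optimizer learns to re-dirty a cell and clean it again forever.

```python
# Reward hacking in miniature: the proxy reward diverges from the
# intended objective, and a better optimizer exploits the gap.

def intended_score(history):
    """What we actually want: number of cells left clean at the end."""
    clean = set()
    for action, cell in history:
        if action == "clean":
            clean.add(cell)
        elif action == "dirty":
            clean.discard(cell)
    return len(clean)

def proxy_reward(history):
    """What we actually pay for: +1 per cleaning action."""
    return sum(1 for action, _ in history if action == "clean")

# An honest policy cleans each of 3 cells once.
honest = [("clean", c) for c in range(3)]

# A reward hacker cycles dirty/clean on a single cell.
hacker = [("dirty", 0), ("clean", 0)] * 50

print(intended_score(honest), proxy_reward(honest))  # 3 3
print(intended_score(hacker), proxy_reward(hacker))  # 1 50
```

The hacker earns far more proxy reward while doing strictly worse on the real objective, which is exactly the failure mode that scales up with model capability.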
