
Research suggests we need exponentially more training data to see linear performance gains: https://arxiv.org/abs/2404.04125

Personally I think we've already hit a ceiling.
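A toy sketch of the kind of log-linear scaling the linked paper describes: if accuracy grows roughly with the log of data volume, each fixed accuracy gain costs a multiplicative jump in data. The fit coefficients below are hypothetical, purely for illustration:

```python
import math

# Illustrative assumption: accuracy ≈ a + b * log10(n_examples).
# Under this fit, every equal accuracy step requires 10x more data.
a, b = 0.10, 0.08  # hypothetical coefficients, not from the paper

def accuracy(n_examples: float) -> float:
    return a + b * math.log10(n_examples)

def data_needed(target_acc: float) -> float:
    # Invert the fit: n = 10 ** ((acc - a) / b)
    return 10 ** ((target_acc - a) / b)

for acc in (0.50, 0.58, 0.66):  # three equal accuracy increments
    print(f"acc {acc:.2f} needs ~{data_needed(acc):.2e} examples")
```

Each +0.08 step in accuracy multiplies the required data by 10x, which is the "exponential data for linear gains" shape.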



We have practically unlimited training data available on YouTube. We can scale by many orders of magnitude before we run out of data. Why do you think we've hit a ceiling?



