
Wonder if this is the same as the discussion from 35 days ago on "OpenAI Employee: GPT-4 has been static since March"

https://news.ycombinator.com/item?id=36155267



The base model may have been static, but not necessarily the RLHF fine-tuned layers they might have added on top, or the shortcuts they're taking during inference due to such fine-tuning (or for performance optimization unrelated to fine-tuning). A rough sketch of the distinction is below.
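
As a minimal sketch only (PyTorch, with a made-up LoRA-style adapter, hypothetical layer sizes, and no claim this is how OpenAI actually serves GPT-4): the base weights stay frozen the whole time, yet updating a small adapter on top changes the outputs.

    import torch
    import torch.nn as nn

    class AdapterLinear(nn.Module):
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False          # base weights frozen ("static")
            self.lora_a = nn.Linear(base.in_features, rank, bias=False)
            self.lora_b = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op

        def forward(self, x):
            # frozen base output plus a low-rank correction
            return self.base(x) + self.lora_b(self.lora_a(x))

    base = nn.Linear(16, 16)
    layer = AdapterLinear(base)
    x = torch.randn(1, 16)

    before = layer(x)                                # identical to base(x) while adapter is zero
    nn.init.normal_(layer.lora_b.weight, std=0.02)   # a "fine-tuning update" touches only the adapter
    after = layer(x)                                 # different behaviour, same frozen base weights

    print(torch.allclose(before, after))             # False

So "the base model hasn't changed" and "the model you're talking to behaves differently" are entirely compatible statements.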


In “legacy” software development, this would be the equivalent of saying “our database schema is the same” while completely ignoring all of the business logic and UI that sits in front of that database.



