
> I don’t really understand what you’re testing for?

For this hypothesis: The intelligence illusion is in the mind of the user and not in the LLM itself.

And yes, the notion was provided by the training data. But the model had to actually learn that notion from the data, rather than parrot memorized lists or excerpts from the training set, because the problem space is too vast and the training set too small to brute-force it.
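
For a rough sense of the scale argument, here is a tiny sketch with hypothetical parameters (the list length and value range are my assumptions, not the actual experiment's settings):

    # Hypothetical parameters: lists of 10 integers drawn from 0..999.
    # That already gives 1000**10 = 1e30 possible inputs, so memorizing
    # input -> sorted-output pairs from a training set is infeasible.
    possible_inputs = 1000 ** 10
    print(f"{possible_inputs:.1e}")  # 1.0e+30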

The output lists were sorted in ascending order, the same way that I generated them for the training data. The sortedness is directly verifiable without me reading between the lines to infer something that isn't really there.
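
To make "directly verifiable" concrete, a minimal sketch of the kind of check this implies, assuming the outputs have already been parsed into plain lists of numbers (the parsing step and names here are hypothetical, not the actual test harness):

    # Verify that each model output list is in ascending order.
    def is_sorted_ascending(values):
        return all(a <= b for a, b in zip(values, values[1:]))

    outputs = [
        [1, 3, 7, 12],       # example model output
        [2, 2, 5, 9, 40],    # duplicates still count as ascending
    ]

    assert all(is_sorted_ascending(lst) for lst in outputs)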


