Hacker News

How many companies train on data that contains "I don't know" responses? Have you ever talked with a toddler / young child? You need to explicitly teach children not to bullshit. At least I needed to teach mine.


Never mind toddlers, have you ever hired people? A far smaller proportion of professional adults will say “I don’t know” than a lot of people here seem to believe.


I never thought about this but I have experienced this with my children.


> train on data that contains 'i don't know' responses

The "dunno" must not be hardcoded in the data, it must be an output of judgement.


Judgement is what we call a system trained on good data.


No, I call judgement a logical process of assessment.

You have a large amount of material about the endeavours of some "Michael Jordan" in some sport: the logic in the system decides that if the "Michael Jordan" in context can be construed to be *that* "Michael Jordan", then there is a sound probability he is a sportsman. You have very little material about a "John R. Brickabracker": the logic in the system decides that the material is insufficient to take a good guess.
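The judgement being described can be sketched as a confidence threshold over candidate answers: answer when the evidence clears the bar, abstain when it doesn't. A minimal illustrative sketch, assuming hypothetical names, scores, and a threshold (nothing here is anyone's actual implementation):

```python
# Sketch: abstention as an output of judgement, not an "I don't know"
# hardcoded in the training data. Scores and threshold are assumptions.

def answer_or_abstain(candidates, min_confidence=0.8):
    """Return the best-scoring answer only if its confidence clears
    the threshold; otherwise abstain."""
    if not candidates:
        return "I don't know"
    best, score = max(candidates.items(), key=lambda kv: kv[1])
    if score < min_confidence:
        return "I don't know"
    return best

# Plenty of material about "Michael Jordan" -> answer confidently.
print(answer_or_abstain({"basketball player": 0.95, "actor": 0.03}))
# Sparse material about "John R. Brickabracker" -> abstain.
print(answer_or_abstain({"engineer": 0.40, "writer": 0.35}))
```

The point of the sketch is that "dunno" is never stored as data; it emerges whenever no candidate accumulates enough support.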


AI is not a toddler. It's not human. It fails in ways that are not well understood and sometimes in an unpredictable manner.


Actually, it fails exactly how I would expect something trained purely on knowledge, and not on morals, to fail.


Then I expect your personal fortunes are tied up in hyping the "generative AI are just like people!" meme. Your comment is wholly detached from the reality of using LLMs. I do not expect we'll be able to meet eye-to-eye on the topic.



