* ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.*
These things are bullshit generators that write plausible-sounding but incorrect things.
Which is why they can be used as a prompt, but never as something to be relied upon.
I am worried about when swarms of these bots take over and overwhelm our current "sources of truth" online with utter bullshit that amasses more likes and retweets than any group of humans could, at scale.
That does seem like a plausible nightmare scenario: they are somehow set loose generating content, rewarded for virality, and completely drown out content generated by humans. At that point they would exhibit accelerating nonsense, and this would be disruptive for all of society. Maybe.