It doesn't have to be a god-like arbiter of truth; it just needs to use actual facts it can source rather than making things up out of thin air, which is exactly what it does.
I recently asked it how Fermat's Last Theorem was discussed in Star Trek (referencing the episode where Picard tells Riker that it has never been solved, even though in reality it was solved less than ten years after the episode aired), and it wrote a very convincing answer that was complete BS, inventing details about episodes that didn't actually happen. Detailed plot summaries of these episodes are easy to find on Wikipedia or Memory Alpha, so I'm really curious where ChatGPT got its info, or whether it just likes to make things up that sound plausible but are completely wrong. Just for starters, according to ChatGPT, it wasn't Picard who talked about Fermat's theorem; it was Data!
ChatGPT is meant to mimic a human, not be some god-like arbiter of truth. No such entity will ever exist.