Of course, my technical friends very quickly found the edge cases, getting it to contradict itself, etc.
OK, I'm a technical person, but I asked the chatbot in the article broad questions that were difficult but not "tricky" ("what's a good race to play for a Druid in DnD?", "Compare Kerouac's On The Road to his Desolation Angels") and got a reasonable summary of search results plus answers that were straight-up false.
Maybe your "nontechnical" friends weren't able to notice the misinformation in the thing's output, but that seems like more of a problem, not less.
Also, ChatGPT in particular seems to go to pains to say it's not conscious, and that's actually a good thing. These chatbots can be useful search summarizers that make their limits clear (like github navigator). They're noxious if they instill a delusion of their consciousness in people, and I don't think you should be so happy about fooling your friends. Every new technology has initially had cases where people could be deluded into thinking it was magic, but those instances can't be taken as proof of that magic or as bragging rights.
You wouldn't even need the model to be trained in real time. I'd love to see OpenAI buy Wolfram Research. WolframAlpha has managed to integrate tons of external data into a natural language interface. ChatGPT already knows when to insert placeholders, such as "$XX.XX" or "[city name]" when it doesn't know a specific bit of information. Combining the two could be very powerful. You could have data that's far more current than what's possible by retraining a large model.
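A minimal sketch of what that combination might look like, assuming the model marks unknowns with bracketed placeholders and assuming a hypothetical query_wolfram_alpha helper standing in for a real API call:

    import re

    # Hypothetical stand-in for a live data source (e.g. the WolframAlpha API).
    # A real integration would issue an HTTP query here; this returns canned data.
    def query_wolfram_alpha(question: str) -> str:
        canned = {"current population of Paris": "2.1 million"}
        return canned.get(question, "[unknown]")

    # Matches bracketed placeholders like "[city name]" in the model's text.
    PLACEHOLDER = re.compile(r"\[([^\]]+)\]")

    def fill_placeholders(model_output: str) -> str:
        # Swap each placeholder the model emitted for a fresh lookup.
        return PLACEHOLDER.sub(lambda m: query_wolfram_alpha(m.group(1)), model_output)

    print(fill_placeholders("Paris has about [current population of Paris] residents."))
    # -> Paris has about 2.1 million residents.

Whether the lookup hits WolframAlpha or some other source, the model only has to know when it doesn't know.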
I didn't go into it trying to break or trick it. The only thing tricky about the questions I asked was that I knew the answers to them. I don't think it's necessarily dumber than the first page of a Google search, but it's certainly not better informed than that. But it certainly seems smart, which is actually a bit problematic.
It’s actually not that different from chatting with the know-it-all type of Internet rando: they can talk to you about any topic and seem knowledgeable about all of them, but steer into a topic you actually know about and you realize they’re just making shit up or regurgitating myths they read somewhere. You can find that kind of user on HN.
Yeah, this is my main concern about GPT-3: there's no truth-fiction slider, and it will often slip complete fabrications into the output, making it dangerous to rely on for real-world information. Which is really a shame, because it actually gives great output most of the time.
I have never seen a human-made website with a truth-fiction slider. The answers can be straight-up false and scary, but that's no different from other publications out there.
Even with the most credible news sources, it is still up to the person reading it to sense the BS.
I've never believed in natural language as a way to tell a computer to do things when the objective is a specific result (I've been skeptical since pre-2011).
It wouldn't be used to fly a plane without lots of physical buttons as a fallback.
Composing rigid instructions for a computer is already hard, even with precisely defined semantics. Whether statically or dynamically typed, developers already struggle to get rid of a single bug.
AI will serve as middleware when the objective is an arbitrary result, roughly like this (a code sketch follows the diagram):
Human
|> UI (request)
|> AI
|> UI (response)
|> Human
|> UI (request with heuristic)
|> Computer does thing
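A minimal sketch of that loop in Python, assuming hypothetical ask_ai and execute helpers standing in for real integrations; the key point is that the human confirms the AI's proposal before the computer does anything:

    # Hypothetical helpers: ask_ai turns a request into a proposed action,
    # execute actually performs it. Both stand in for real integrations.
    def ask_ai(request: str) -> str:
        return f"proposed action for: {request}"

    def execute(action: str) -> None:
        print(f"computer does: {action}")

    def middleware_loop(request: str) -> None:
        proposal = ask_ai(request)            # Human |> UI (request) |> AI
        print(f"AI proposes: {proposal}")     # AI |> UI (response) |> Human
        if input("Confirm? [y/n] ") == "y":   # Human |> UI (request with heuristic)
            execute(proposal)                 # Computer does thing

    middleware_loop("book a flight to Paris")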
It’s a technical preview, not a finished product. If they’d tested it on every combination of Kerouac novels before release, it would probably never have seen the light of day :) I’m still incredibly impressed.