Hacker News

All of the 25 comments so far are missing the point.

Surprisingly, the prevailing opinion on HN is still "ChatGPT is a useless stochastic parrot," probably because people are too cheap to fork over $20 to try GPT-4.

Yes, GPT-3.5 is mostly a toy. Yes, it hallucinates a non-trivial amount of the time. But IMO, GPT-4 is in a completely different class, and it has almost entirely replaced search for me.

If OpenAI really wants ChatGPT to challenge search, it has to be free and accessible without requiring a sign up.

I very rarely use any search engine now. Really I only use search when I'm looking for reddit threads or a specific place in Google Maps.

All of my other queries: how things work, history, how to set up my Wacom, unit conversions, calculating a mortgage, explaining stdlib functions with examples, and so on. All of that goes to ChatGPT. It's a million times faster and more efficient than scrolling through endless SEO blog spam and overly verbose Medium articles.
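One of those queries ("calculating mortgage") is just the standard fixed-rate amortization formula; here is a minimal sketch in Python, assuming monthly compounding (the function name and example figures are mine, not from the thread):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed-rate amortization: M = P * r * (1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    if r == 0:
        return principal / n      # zero-interest edge case
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# $300,000 at 6% APR over 30 years ≈ $1798.65/month
print(round(monthly_payment(300_000, 0.06, 30), 2))
```

The same one-liner is exactly the kind of thing a chat model can produce and explain inline, rather than sending you to a calculator page wrapped in ads.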

This update makes GPT-3.5 available without sign-up, not GPT-4. But if/when GPT-4 becomes available without sign-up, I have no doubt the rest of the population will experience the same lightbulb moment I did, and it will replace search for them as well.



> probably because they are too cheap to fork over $20 to try GPT4.

Or because GPT-3.5 was hyped to the skies by all and sundry, and those who were convinced enough to use it still found it lacking. Many like you are now saying, "Oh yeah, GPT-3.5 was awful, but this really is the future."

Not everybody wants the quality of their work to depend on the whims of an OAI product manager. If GPT-4 is as good as claimed, it will find its way into my workflow. For now, I treat AI claims as fiction unless they come with a JSFiddle-style code example... there's too much snake oil around to do otherwise.


Bard already integrates with Google Search. These companies aren't competing to be the best, just to be good enough in existing form factors.


> mostly ... but

I have been busy with other matters, so I'll ask: in the absence (I presume) of a model of their reasoning, have any metrics or measurements been produced to assess the deep reliability (e.g., absence of hallucination, logical consistency, etc.) of LLMs?


It is bizarre to me that your comment was downvoted; it is incredibly insightful. Google is very worried about this right now, and they should be.



