They do this already, but the problem is it takes me more time to verify if what they're saying is correct than to just use a search engine. All the LLMs constantly make stuff up & have extremely low precision & recall of information
Disagree - I actually think all the problems the author lays out about Deep Research apply just as well to GPT-4o / o3-mini-whatever. These things are just absolutely terrible at precision & recall of information.
I think Deep Research shows that these things can be very good at precision and recall of information if you give them access to the right tools... but that's not enough, because of source quality. A model that has great precision and recall but uses flawed reports from Statista and Statcounter is still going to give you bad information.
Deep Research doesn't give the numbers that are in Statcounter and Statista. It's choosing the wrong sources, but it's also failing to represent them accurately.
Wow, that's really surprising. My experience with much simpler RAG workflows is that once you stick a number in the context the LLMs can reliably parrot that number back out again later on.
Presumably Deep Research has a bunch of weird multi-LLM-agent things going on; maybe there's something about its architecture that makes it more likely for mistakes like that to creep in?
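Just to spell out what I mean by "stick a number in the context": in the simple RAG setups I've used, the retrieved passage gets pasted straight into the prompt, so the model only has to copy the figure back out. A rough sketch of that pattern using the OpenAI Python SDK - the passage, question, and model choice are made-up placeholders, not anything from Deep Research:

```python
# Minimal context-stuffing sketch: paste a retrieved passage into the prompt
# and ask the model to answer only from it. The passage, question, and model
# name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

retrieved_passage = "In Q3 2024 the company reported revenue of $4.21B."
question = "What revenue did the company report in Q3 2024?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{retrieved_passage}\n\nQuestion: {question}"},
    ],
)

print(response.choices[0].message.content)  # in my experience this reliably echoes "$4.21B"
```

With the number sitting right there in the prompt, I almost never see it come back wrong - which is why the Deep Research behaviour surprises me.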
Have a look at the previous essay. I couldn't get ChatGPT 4o to give me a number in a PDF correctly even when I gave it the PDF, the page number, and the row and column.
ChatGPT treats a PDF upload as a data extraction problem: it first pulls out all of the embedded textual content in the PDF and feeds that into the model.
This fails for PDFs that contain images of scanned documents, since ChatGPT isn't tapping its vision abilities to extract that information.
Claude and Gemini both apply their vision capabilities to PDF content, so they can "see" the data.
So my hunch is that ChatGPT couldn't extract useful information from the PDF you provided and instead fell back on whatever was in its training data, effectively hallucinating a response and pretending it came from the document.
That's a huge failure on OpenAI's part, but it's not illustrative of models being unable to interpret documents: it's illustrative of OpenAI's ChatGPT PDF feature being unable to extract non-textual image content (and then hallucinating on top of that inability).
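To illustrate the failure mode: a text-only extraction pass gets nothing usable out of a scanned page, and that's the point where a pipeline either switches to vision on the rendered page image or (as ChatGPT apparently does) falls back on the model's training data. A rough sketch using pypdf - the file name is a hypothetical example, and this is my guess at the shape of the pipeline, not OpenAI's actual implementation:

```python
# Sketch of why text-only extraction fails on scanned PDFs.
# "scanned_report.pdf" is a hypothetical file used for illustration.
from pypdf import PdfReader

reader = PdfReader("scanned_report.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    if not text.strip():
        # The page is just an image of a document: there is no embedded text
        # layer to extract. A vision-capable model would need the rendered page
        # image here; a text-only pipeline has nothing real to pass along.
        print(f"Page {page_number}: no extractable text (likely a scan)")
    else:
        print(f"Page {page_number}: {len(text)} characters of embedded text")
```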
Interesting, thanks.
I think the higher-level problem is that 1: I have no way to know about this failure mode when using the product, and 2: I don't really know if I can rely on Claude to get this right every single time either, or what else it would fail at instead.
Yeah, completely understand that. I talked about this problem on stage as an illustration of how infuriatingly difficult these tools are to use because of the vast number of weird undocumented edge cases like this.
This is an unfortunate example though because it undermines one of the few ways in which I've grown to genuinely trust these models: I'm confident that if the model is top tier it will reliably answer questions about information I've directly fed into the context.
[... unless it's GPT-4o and the content was scanned images bundled in a PDF!]
It's also why I really care that I can control the context and see what's in it - systems that hide the context from me (most RAG systems, search assistants etc) leave me unable to confidently tell what's been fed in, which makes them even harder for me to trust.
I don't think this is a good take.
Discovery & science are inherently meaningful even if the applications are not immediately felt.
Nuclear magnetic resonance (NMR) was discovered in 1938, but there was no obvious applicability of it to everyday life. In 1971, 33 years later, Paul Lauterbur used it to develop the first MRI.
I don't think they're saying it's up to tech companies to decide what has value, more that the development of new technology itself ends up deciding for the rest of the world how things are valued.
It's been this way for 10,000 years, since the invention of the wheel. New inventions change how things are valued by making it easier for people to do more work in less time.
This sounds compelling, but where I always get stuck is on trusting what the LLM / agent spits back out. Every time I've tried to use it for one of the use cases you mentioned above and then actually dug into the sources it may or may not mention, it's almost always highly imprecise, missing really important details, or straight up lying or hallucinating.
How do you get around this issue?
Granted, on (3) you can just verify yourself by running the code, so trust/accuracy isn't as much of an issue there, but it's still annoying when things don't work.
Frame your question in human terms. LLM -> employee, hallucination -> false belief, etc. Same hiring problems. Same solutions.
You have a problem. The candidate must reliably solve it. What are their skills, general aptitudes, and observed reliability for this problem? Set them up to succeed, but move on if you distrust them to meet the role’s responsibility. We are all flawed, and that’s the nature of uncertainty when working with others.
Past that, there’s little situational advice that one can give about a general intelligence. If you want specific advice, give your specific attempt at a solution!
Are you saying it would have been a good thing for your wife's parents not to reproduce? Where would that leave her and you?
(noted that the childhood you described sounds awful, I agree)
While my wife and I love each other, yes, it would have been better had they not had children -- for her sake, and this is her own feeling. Her trauma from childhood and young adulthood continues to affect her deeply and daily even now, decades later, in manifold ways, from complicated health issues to self-efficacy beliefs to frequent nightmares and constant fear about the future. When your own parent refuses to give you food, faith that everything will work out in the end can be hard to cultivate.
Personally, selfishly, the thought of my life without her is depressing, absolutely. But I can love her and yet -- or more precisely, "and so," because it's out of empathy that I feel this way -- I can understand and support her desire never to have existed.
That's a philosophical question, but I would say probably better off. If that hadn't taken place, a lot of abuse around the world wouldn't have either.
This is not a strong counterargument. Costs will come down.
When there is insanely high demand for a product (like there is here) and the thing makes people more productive, costs always come down due to pure incentives to make it cheaper.
This happened with electricity, the car, air travel, solar power, etc.
My suspicion is that FTL travel wouldn't work with this drive because of what the Wikipedia article states here:
"Another possible issue is that, although the Alcubierre metric is consistent with Einstein's equations, general relativity does not incorporate quantum mechanics. Some physicists have presented arguments to suggest that a theory of quantum gravity (which would incorporate both theories) would eliminate those solutions in general relativity that allow for backward time travel (see the chronology protection conjecture) and thus make the Alcubierre drive invalid."
My fun layman conjecture based on absolutely no credible knowledge of this area beyond reading articles like this is:
-- the universe has a forward direction of time & causality that is as unbreakable as the impossibility of escaping a black hole
-- in a black hole space & time are so warped they sort of switch roles - any direction you try to move in X,Y,Z only brings you closer to the singularity. In much the same way I think the outside universe operates in Time - no matter what you do, you move forward in time.
-- My somewhat unpleasant belief here is the reason you and everything in the universe can only move forward in Time is because of the Block universe[1] concept: all of X,Y,Z,T are already set & predetermined, so it makes no sense to actually try to change your path through it.
-- All of these things put together invalidate the concept of FTL travel, because FTL travel (even with a clever trick like warping spacetime) would violate causality & allow travelers to go back in time