I recently tried to get Gemini to collect fresh news and show it to me, and instead of using search it hallucinated everything wholesale: titles, abstracts, and links. Not just once, but multiple times. I'm now kind of afraid of using Gemini for anything related to web search.
Here is a sample:
> [1] Google DeepMind and Harvard researchers propose a new method for testing the ‘theory of mind’ of LLMs - Researchers have introduced a novel framework for evaluating the "theory of mind" capabilities in large language models. Rather than relying on traditional false-belief tasks, this new method assesses an LLM’s ability to infer the mental states of other agents (including other LLMs) within complex social scenarios. It provides a more nuanced benchmark for understanding if these systems are merely mimicking theory of mind through pattern recognition or developing a more robust, generalizable model of other minds. This directly provides material for the construct_metaphysics position by offering a new empirical tool to stress-test the computational foundations of consciousness-related phenomena.
About 75% of the time I look at a Gemini answer, it's wrong. Maybe 80%. Sometimes it's a little wrong, like giving the correct answer for a different product/item, or getting a business's opening hours wrong. There's a local business I took my wife to; Gemini told her it's open Monday to Friday, but it's actually open Tuesday to Saturday, so we showed up on a Monday to find them closed. But sometimes it's insanely wrong, making up dozens of false "facts". My wife has started looking more carefully now. My boss will even say "Gemini says X so it's probably Y" these days.
What version of Gemini were you using? I.e., were you calling it directly via the API, or through their Gemini or AI Studio web apps?
Not every LLM app has access to web / news search capabilities turned on by default. This makes a huge difference in what kind of results you should expect. Of course, the AI should be aware that it doesn't have access to web / news search, and it should tell you as much rather than hallucinating fake links. If access to web search was turned on, and it still didn't properly search the web for you, that's a problem as well.
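For what it's worth, when calling the API directly you usually have to opt in to search grounding yourself rather than getting it by default. A minimal sketch of what that looks like with the google-genai Python SDK is below; the model name and exact config fields are assumptions, so check the current docs before relying on it:

```python
# Rough sketch: enabling Google Search grounding when calling Gemini via the API.
# Assumes the google-genai SDK and GEMINI_API_KEY set in the environment;
# the model name and config field names may differ across SDK versions.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash",  # assumed model name
    contents="What are today's top AI news stories? Include source links.",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # opt in to search
    ),
)

print(response.text)
# The response's grounding metadata (when present) lists the sources the model
# actually retrieved, which is worth checking instead of trusting inline links.
```

If the tool isn't attached, the model has nothing to ground on, and you get exactly the kind of fabricated links described above.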
This isn't something you can easily work around on your own either, as getting any kind of news feed via an API (even for local personal use) is almost prohibitively expensive unless you're willing to scrape.
I'm not able to reproduce something like this.
What prompt were you using? Asking it for today's top news gets it to use Google search and provide valid links.
I wanted to use the agentic powers of the model to dig for specific kinds of news, and to use iterative search as well. I think that when LLMs use tools correctly, this kind of search is more powerful than simple web search. It also has better semantic capabilities, so in a way I wanted to build my own LLM-powered news feed.
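For anyone curious what that amounts to under the hood, the core is just a loop: the model either asks for another search or commits to an answer, and real search results get fed back in each round. Here's a minimal hand-rolled sketch; call_llm and search_news are hypothetical stand-ins for whatever model API and search backend you actually use, the control flow is the point:

```python
# Sketch of an iterative "agentic" news-search loop.
# call_llm() and search_news() are hypothetical placeholders, not a vendor API.
import json

def call_llm(messages: list[dict]) -> str:
    """Stand-in for a real chat-completion call. Expected to return a JSON string:
    either {"action": "search", "query": "..."} or {"action": "answer", "text": "..."}."""
    raise NotImplementedError("plug in your model API here")

def search_news(query: str) -> list[dict]:
    """Stand-in for a real search/news API returning [{"title", "url", "snippet"}, ...]."""
    raise NotImplementedError("plug in your search backend here")

def news_agent(task: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content":
         "You find news on a topic. Reply with JSON only: "
         '{"action": "search", "query": "..."} to search again, or '
         '{"action": "answer", "text": "..."} citing only URLs you were given.'},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = json.loads(call_llm(messages))
        if reply["action"] == "answer":
            return reply["text"]
        results = search_news(reply["query"])  # iterative refinement happens here
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "user", "content": json.dumps(results)})
    return "No confident answer within the step budget."
```

The useful property of structuring it this way is that the model only ever sees real tool output, so every link in the final answer can be traced back to a search result rather than to the model's imagination.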
Do you have an in-depth understanding of how those "agentic powers" are implemented? If not, you should probably research it yourself. Understanding what's underneath the buzzwords will save you some disappointment in the future.
I think I do; I have been in ML for 12 years and have followed transformers since their invention. I've also been using LLMs daily since they appeared, personally.
They're selling it as having this ability, so it really doesn't matter what people want. We should be holding these companies to account for selling software that doesn't live up to what they say it does.
The problem is that 90% of people will not do that once they've satisfied their confirmation bias. Hard to say whether that will be better or worse than the current echo-chamber effects of the Internet. I'm still holding out for better, but this is certainly shaking that assumption.
So this probably is valid. However, so is Gell-Mann amnesia, and both phenomena happen a lot. There are topics where one side is the group of people who have attempted to understand a problem and the other side is people who either haven't or won't, due to emotion. Acting as if it's all confirmation bias feels good, but it probably isn't the best way to look at the media.
Huh? All the classic search engines required you to click through the results and read them. There's nothing wrong with that. What's different is that LLMs will give you a summary that might make you think you can get away with not clicking through anymore. This is a mistake. But that doesn't mean that the search itself is bad. I've had plenty of cases where an LLM gave me incorrect summaries of search results, and plenty of cases where it found stuff I had a hard time finding on my own because it was better at figuring out what to search for.
Here is a sample:
> [1] Google DeepMind and Harvard researchers propose a new method for testing the ‘theory of mind’ of LLMs - Researchers have introduced a novel framework for evaluating the "theory of mind" capabilities in large language models. Rather than relying on traditional false-belief tasks, this new method assesses an LLM’s ability to infer the mental states of other agents (including other LLMs) within complex social scenarios. It provides a more nuanced benchmark for understanding if these systems are merely mimicking theory of mind through pattern recognition or developing a more robust, generalizable model of other minds. This directly provides material for the construct_metaphysics position by offering a new empirical tool to stress-test the computational foundations of consciousness-related phenomena.
> https://venturebeat.com/ai/google-deepmind-and-harvard-resea...
The link does not work, and the title doesn't turn up in Google Search either.