cross-posted from: https://lemmy.world/post/15864003
You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)
Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”
If it’s an unresolved problem, then they should stop offering it.
Not that I think highly of Google these days, but they used to be proud of offering accurate results. If this thing is not accurate, then continuing to offer it for the sake of offering it is just sheer negligence.
You know someone is going to die because of it. Imagine if AI tells someone with a peanut allergy that they can eat peanuts if they soak them in honey first, or something.
oh wow gee whiz bud it sure does sound a lot as though you should pretty please perhaps consider to maybe possibly
STOP FUCKIN DOIN THAT SHIT
Caveat lector: I haven’t tested the AI in Google Search yet, due to country restrictions. However, I’ve played quite a bit with Gemini, and odds are that AI Overviews and Gemini are either the same model or variations of each other.
Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations”
At least a fair chunk of the crap being output is not caused by hallucinations, but by feeding garbage like this into Google’s AI.
However, Pichai can’t be honest, right? If he were, he’d be admitting that your content was fed into Google’s AI, without your consent, to generate those answers. Instead he dresses it up as the AI malfunctioning, aka “hallucinating”.
But Pichai seems to downplay the errors.
If the output of that AI is as unreliable as Gemini’s, it’s less “there are still times it’s going to get it wrong” and more “whether a statement is true or false depends on a coin toss”.
That is not a reliable source of information. And yet they made it the default search experience, for at least some people (thankfully not me… yet).
Pichai is a liar and a sorry excuse of a human being.
Really interesting to see just how fast this hype curve might pass. From “they’re taking our jobs” to “what are they useful for again” pretty quickly it seems.
Still, I personally would encourage everyone to keep up vigilance against the darker capitalistic implications that this “episode” has confronted us with.
Perhaps AGI isn’t around the corner… but this whole AI thing has been moving relatively quickly. Many may have forgotten, but AlphaGo beating the world champion was kinda shocking at the time and basically no one saw it coming (the player, then No. 1 in the world, AFAIU, retired from Go afterward). And this latest leap into bigger models for image and text generation is still impressive once you stop expecting omniscient digital overlords (which has been a creepy as fuck inclination amongst so many people).
It’s been 7-8 years since AlphaGo. In 7-8 years, we could be looking down the barrel of more advanced AI tools without any cultural or political development around what capitalism is liable to do with such things.