Comment by rf15
21 days ago
So many reports like this, it's not a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
Yeah, after working daily with AI for a decade in a domain where it _does_ work predictably and reliably (image analysis), I continue to be amazed how many of us continue to trust LLM-based text output as being useful. If any human source got their facts wrong this often, we'd surely dismiss them as a counterproductive imbecile.
Or elect them President.
I am beginning to wonder why I use it, but the idea of it is so tempting. Try to google it and get stuck because it's difficult to find, or ask and get an instant response. It's not hard to guess which one is more inviting, but it ends up being a huge time sink anyway.
HAL 9000 in 2028!
Regulation with active enforcement is the only civil way.
The whole point of regulation is for when the profit motive forces companies towards destructive ends for the majority of society. The companies are legally obligated to seek profit above all else, absent regulation.
> Regulation with active enforcement is the only civil way.
What regulation? What enforcement?
These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.
Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.
Regulations exist to override profit motive when corporations are unable to police themselves.
Enforcement ensures accountability.
Fines don't do much in a fiat money-printing environment.
Enforcement is accountability, the kind that stakeholders pay attention to.
Something appropriate would be this: if AI is used in a safety-critical or life-sustaining environment and causes harm or loss, those who chose to use it are presumed guilty until they prove their innocence. I think that would be sufficient, with not just civil but also criminal liability, and with that person and decision documented ahead of time.
> Any country who instituted a regulation like that would see all of the LLM advances and research instantly leave and move to other countries.
This is a fallacy. It's a spectrum: research would still occur, but it would be tempered by law and accountability, instead of the wild west where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.
AI integration at a point where it can impact the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.
It's quite reasonable that the needs of national security trump private business making profit in a destructive way.
A very simple example would be a mandatory mechanism for correcting mistakes in prebaked LLM outputs, and an ability to opt out of things like Gemini AI Overview on pages about you. Regulation isn't all or nothing, viewing it like that is reductive.
> Are we getting close to our very own Stop the Slop campaign?
I don't think so. We read about the handful of failures while there are billions of successful queries every day. In fact, I think AI Overviews is sticky and here to stay.
Are we sure these billions of queries are “successful” for the actual user journey? Maybe this is particular to my circle, but as the only “tech guy” most of my friends and family know, I am regularly asked if I know how to turn off Google AI Overviews, because many people find them to be garbage.
Why on earth are you accepting his premise that there are billions of successful requests? I just asked chatgpt about query success rate and it replied (part):
"...Semantic Errors / Hallucinations: On factual queries—especially legal ones—models hallucinate roughly 58–88% of the time
A journalism‑focused study found LLM-based search tools (e.g., ChatGPT Search, Perplexity, Grok) were incorrect in 60%+ of news‑related queries
Specialized legal AI tools (e.g., Lexis+, Westlaw) still showed error rates between 17% and 34%, despite being domain‑tuned "