Comment by Jowsey
9 days ago
This reply is entirely AI generated. You guys are trying to find reason in a hallucination. It's unfortunately impossible to put into words what the "LLM smell" is at this point, but I trust someone else who spends a lot of time reading LLM output can back me up on this.
I've seen these agent-written fake anecdotes on Twitter, Reddit, and now here, all with the exact same formatting. They pretend to be real people with real anecdotes, but they're all completely made up.
The two-day-old account is an obvious hint, but I've got to be honest: the content didn't look suspicious on first read. I know you touched on it above, but what do you think triggered your AI-generated thought?
Some people don’t farm social credit. I usually drop my account after it gets too high because the evidence of hipsters approving of my words shames me.
it's this part:
> latency matters more than raw accuracy – think industrial inspection
it (rightfully) raises red flags when you hear someone confidently claim raw accuracy is _not_ important in something like _inspection_
They didn't say it wasn't important; they said latency was more important, and they're right for many use cases. Once you can't run in real time where you're operating, you need to move to batching or offloading the work to a pool of workers, and to handling more async issues. You can no longer have something that shunts the component off to another track right where your camera is; you need to have the camera somewhere else, then 40s later pull the item out of another location. You need good networking so you can fire off images to be processed elsewhere. That's also a bunch more systems to maintain.
These things aren't impossible, of course, but it's additional management compared to "place the device here".
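A rough sketch of the offloading pattern being described, assuming a local thread pool standing in for remote workers (all names here are made up for illustration):

```python
# Toy sketch of "fire images off to a pool of workers": items are queued
# instead of inspected in-line, and verdicts come back asynchronously.
# slow_inspect is a stand-in for an accurate-but-slow model.
import queue
import threading

def slow_inspect(image_id: int) -> tuple[int, bool]:
    """Pretend model: flags every 10th item as defective."""
    return image_id, image_id % 10 == 0

def run_worker_pool(image_ids, num_workers=4):
    jobs: queue.Queue = queue.Queue()
    results: queue.Queue = queue.Queue()

    def worker():
        while True:
            image_id = jobs.get()
            if image_id is None:  # poison pill: shut this worker down
                break
            results.put(slow_inspect(image_id))

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for image_id in image_ids:
        jobs.put(image_id)
    for _ in threads:
        jobs.put(None)
    for t in threads:
        t.join()

    # Verdicts arrive out of order -- by now the item is long past the
    # camera, so the caller has to match results back to physical items.
    return dict(results.queue)

verdicts = run_worker_pool(range(100))
```

The extra moving parts (queues, workers, matching verdicts back to items downstream) are exactly the "additional management" the comment is pointing at.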
Here's how you know that accuracy isn't the be-all and end-all of the discussion: we already deploy systems with less-than-human accuracy to monitor things, and when we use humans we very rarely inspect every single item. So there must be a tradeoff we're happy making in lots of industries.
Even if you're focused on not missing anything, lower accuracy that comes at the cost of more false positives can be massively useful, because you can then run a two-step process (even with humans as the second step if you need to). The goal of the first step is to discard the 99% of totally fine items so you spend the costly process on just the remaining 1%.
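The two-step idea can be sketched as a toy cascade (thresholds and function names are invented for illustration): a cheap first pass with a deliberately loose threshold keeps recall high, and only the flagged slice reaches the costly second pass.

```python
# Toy two-stage inspection cascade: cheap_screen runs on everything and
# over-flags on purpose; costly_inspect (which could be a human) only
# runs on the items the first stage couldn't rule out.

def cheap_screen(score: float) -> bool:
    """Fast first pass: flag anything remotely suspicious."""
    return score > 0.2  # loose threshold -> many false positives

def costly_inspect(score: float) -> bool:
    """Slow, accurate second pass: the real verdict."""
    return score > 0.9

def cascade(scores):
    flagged = [s for s in scores if cheap_screen(s)]     # cheap, on all items
    defects = [s for s in flagged if costly_inspect(s)]  # costly, on the few
    return len(flagged), len(defects)

# 1000 items with evenly spread "suspicion" scores; the expensive step
# only ever sees the flagged subset.
scores = [i / 1000 for i in range(1000)]
flagged, defects = cascade(scores)
```

The point is that the expensive resource is spent on `flagged`, not on all 1000 items, which is why a low-precision first stage can still be a big win.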
That is definitely the right part. The dash isn't a symbol on a normal keyboard, and "think blah blah blah" occurs frequently in LLM chat sessions (for me, at least). I suspect those easy-to-spot indicators won't be around forever, which will make AI posts much more difficult to spot. But I think the thinly veiled advertisement that follows in that clause will be the bigger tell in future models. If we feel like we're being marketed at, we can almost guarantee there isn't a human on the other end. This isn't the internet I signed up for.
they might be referring to using a quantised version, which gives them high performance while the accuracy drop matters less
Their account only existing for two days lends you a lot of credibility...
That’s wild. And scary.
What's scary is that it's still the highest upvoted comment on this submission, although it obviously doesn't make sense.
Hope HN has tooling ready to handle this ongoing onslaught of manipulation...
AI will make humans more AI-like, and milestones will be celebrated when it more perfectly simulates degraded humanity
This right here. I think we all need to think about what is happening right now. Dead internet theory might be plausible. What goal would an AI writing crap responses on reddit/hacker news/whatnot even have in commenting?
Other comments from that account feel very similar. Eerie.