Comment by 0x500x79
6 days ago
I had a PM at my company (with an engineering background) post AI-generated slop in a ticket this week. It was very frustrating.
We asked them: "Where is the xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate the abc use cases?" No, they did not.
So we had a PM pushing a narrative to executives that this feature was simple and that he could build it with AI-generated code, when what he produced didn't cover even 5% of the use cases that would need to be solved to ship the feature.
This is the state of things right now: all talk, few results, and other non-technical people being fed the same bullshit from multiple angles.
> I had a PM at my company (with an engineering background) post AI-generated slop in a ticket this week. It was very frustrating.
This is likely because LLMs solve for producing the document that "best" matches the prompt, via statistical consensus over their training data set.
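To make the "statistical consensus" point concrete, here is a toy sketch (plain Python, not any real LLM or API; the prompt, candidate phrases, and weights are all invented for illustration): generation simply returns whichever continuation is statistically most plausible, and nothing in the loop ever checks whether the claim is true.

```python
# Toy illustration of plausibility-driven generation (not a real LLM).
# The "model" is just a hypothetical table of continuation weights standing in
# for statistical consensus learned from training data.
import random

CONTINUATIONS = {
    "the validation lives in utils/validate.py": 0.55,  # plausible-sounding, may not exist
    "there is no such code in this repository": 0.10,   # true here, but statistically rare
    "see the xyz module's validate() helper": 0.35,     # also plausible-sounding
}

def generate(prompt: str, temperature: float = 0.0) -> str:
    """Return a continuation for the prompt.

    temperature == 0 -> greedy: always the highest-weighted phrase.
    temperature > 0  -> sample in proportion to (re-scaled) weights.
    Either way, "most likely" means most plausible, not most correct.
    """
    if temperature == 0:
        return max(CONTINUATIONS, key=CONTINUATIONS.get)
    phrases = list(CONTINUATIONS)
    weights = [CONTINUATIONS[p] ** (1.0 / temperature) for p in phrases]
    return random.choices(phrases, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(generate("Where is the xyz validation code?"))
    # -> "the validation lives in utils/validate.py", stated confidently,
    #    even though nothing verified that the file or code actually exists.
```

The confident tone falls out of the mechanism: the top-weighted phrase is emitted the same way whether or not the thing it names exists.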
> We asked them: "Where is the xyz code?" It didn't exist; it was a hallucination. We asked them: "Did you validate the abc use cases?" No, they did not.
So many people mistake the certainty implicit in commercial LLM responses for correctness, much as they would with confidently written content from an actual person whose position happens to support their own. It's a confluence of Argument from authority [0] and Subjective validation [1].
0 - https://en.wikipedia.org/wiki/Argument_from_authority
1 - https://en.wikipedia.org/wiki/Subjective_validation
I’ve recently had a couple of people try to help me fix code issues by handing me the results of their AI prompting. It was 100% slop; it made absolutely no sense in the context of the problem.
I figured the issue out the old-fashioned way, but it was a little annoying that I had to waste extra time deciphering the hallucinations, and then explaining why they were hallucinations.