Comment by AdieuToLogic

6 days ago

> I had a PM at my company (with an engineering background) post AI generated slop in a ticket this week. It was very frustrating.

This is likely because LLMs optimize for producing documents that "best" match the prompt, via statistical consensus over their training data set.
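A toy sketch of what "statistical consensus" means here (this is a contrived bigram counter, not how any real LLM is implemented; the corpus and file names are made up): the most *common* continuation wins, regardless of whether it is *true*.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for illustration only.
corpus = [
    "the code is in utils.py",
    "the code is in main.py",
    "the code is in utils.py",
]

# Count which word follows each word across the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_next(word):
    # Returns the statistically most frequent continuation --
    # the "best match", not a verified fact.
    return following[word].most_common(1)[0][0]

print(most_likely_next("in"))  # "utils.py": the consensus answer,
                               # whether or not that file exists
```

The model confidently answers "utils.py" because that string dominates the corpus, not because it checked that the file exists; this is the gap between fluency and correctness the parent comment ran into.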

> We asked them: "Where is xyz code". It didn't exist, it was a hallucination. We asked them: "Did you validated abc use cases?" no they did not.

So many people mistake the certainty implicit in commercial LLM responses for correctness, largely because that is how they would interpret similarly confident content written by actual people, especially when it supports their own position. It's a confluence of argument from authority[0] and subjective validation[1].

0 - https://en.wikipedia.org/wiki/Argument_from_authority

1 - https://en.wikipedia.org/wiki/Subjective_validation