Comment by thomastjeffery

18 hours ago

I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".

It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.

These are not pitfalls. They are core features! An LLM is not sometimes biased; it is bias. An LLM does not sometimes hallucinate; it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
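The point can be made mechanically: whether a sampled token happens to be factually right or wrong, the generation step is the same draw from a learned distribution. A minimal sketch, using a hypothetical three-token vocabulary and made-up logits (nothing here is a real model's output):

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits. The model has no notion of "true" vs
# "false" continuations -- only of which tokens were statistically likely
# to follow in its training data. Sampling "Paris" and sampling "Berlin"
# are the same operation; one just gets labeled a "hallucination" later.
vocab = ["Paris", "Lyon", "Berlin"]
logits = [4.0, 1.0, 0.5]

probs = softmax(logits)
token = random.choices(vocab, weights=probs, k=1)[0]
print(token)
```

There is no separate "hallucinate" code path the model falls into on bad days: every output, desirable or not, comes from the same weighted draw.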