Comment by uludag

14 days ago

I've become utterly disillusioned with LLMs' ability to answer questions that entail even a bit of subjectivity, almost to the point of uselessness. I feel like I'm walking on thin ice, trying to avoid accidentally nudging the model toward a specific response. Asking truly neutral questions is a skill I didn't know existed.

If I let my skeptical guard down for even one prompt, I may be led into a self-reinforcing conversation that ultimately ends up wherever I implicitly nudged it. Choice of conjunctions, sentence structure, tone, maybe even the rhythm of my question all seem to push the model down a set path.

I can easily imagine how heedless users could arrive at some quite delusional conclusions.

LLMs don't have a subjective experience, so they can't actually give subjective opinions. Even if you are actually able to phrase your questions 100% neutrally so as not to inject your own bias into the conversation, the answers you get back aren't going to be based on any sort of coherent "opinion" the AI has, just a statistical mish-mash of training data and whatever biases got injected during post-training. Useful perhaps as a sounding board or for getting a rough approximation of what your typical internet "expert" would think about something, but certainly not something to be blindly trusted.

It's not unreasonable to conclude that humans work the same way. Our language-manipulation skills might have the same flaw: easily tipped from one confabulation to another. Subjective experience is hard to put into words, since so much of it isn't tied to "syllable tokenization".