
Comment by richardw

6 months ago

They’re great at working with the lens on our reality that our text output provides. They are not truth seekers, and truth-seeking is necessarily fundamental to every life form from worms to whales. If we get things wrong, we die. If they get them wrong, they earn 1000 generated tokens.

Why do you say that LLMs are not truth seekers? If I express an informational query poorly, the LLM will infer what I mean and address the well-posed queries I may have intended but failed to express.

Can that not be considered truth-seeking, with the agent-environment boundary being the prompt box?

  • Right now you’re putting in unrequested effort to get to an answer. Nobody is driving you to do this; you’re motivated to get the answer. At some point you’ll be satisfied, or you might give up because there are other things you’d rather be doing.

    An LLM is primarily trying to generate content. It’ll throw the best tokens in there but it won’t lose any sleep if they’re suboptimal. It just doesn’t seek. It won’t come back an hour later and say “you know, I was thinking…”

    I had one frustrating conversation with ChatGPT where I kept asking it to remove a tie from a picture it generated. It kept saying “done, here’s the picture without the tie”, but the tie was still there. Repeatedly. Or it’ll generate a reference or number that is untrue but looks approximately correct. If you did that you’d be absolutely mortified and you’d never do it again. You’d feel shame and a deep desire to be seen as someone who does it properly. It doesn’t have any such drive. Zero fucks given, training finished months ago.

    • LLMs already possess enough emergent consciousness to troll us to keep us from our goals. /s

      But yeah, LLMs act more like bullshit artists rather than scrupulous researchers.

  • They are not intrinsically truth seekers, and any truth seeking behaviour is mostly tuned during the training process.

    Unfortunately, that also means it can easily be undone; just look at Grok in its current lobotomized version.

    • > They are not intrinsically truth seekers

      Is the average person a truth seeker in this sense, someone who performs truth-seeking behavior? In my experience, we prioritize sharing the same perspectives and getting along with others far more than critically examining the world.

      In the sense I just expressed, of figuring out the intent behind a user’s query, that really isn’t a tuned thing: it’s inherent to generative models, which hold a lossy, compressed representation of their training data. It is also the kind of truth-seeking practiced by people who want to communicate.

      5 replies →

    • I keep seeing recent news articles claiming that Grok is flawed or biased, but I’ve been unable to replicate any such behavior on my computer.

      That being said, I don’t ask controversial or political questions; I use it to search for research papers. But when I do try the occasional question of that sort, the response is generally balanced and similar to that of any other LLM.

  • They keep giving me incorrect answers to verifiable questions. They clearly don't 'seek' anything.

    • Most people on HN are tech people, and it is tiring to see that they have not just spent a Sunday morning doing a Karpathy-style LLM implementation. Somehow, like believing in a deity, even smart folk seem to think ‘there is more’. Stop. Go to YouTube or wherever, watch a video on practically implementing a GPT-like thing, and code along. It takes very little time, and your hallucinations about AGI with these models shall be exorcized.
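
      For a sense of scale, here is a toy sketch of the core loop such a walkthrough builds up to. It is plain Python and a character-level bigram counter, not Karpathy’s code and nowhere near a real transformer, but the generation loop has the same shape: given the context, get a probability distribution over next tokens, sample one, append it, repeat. Nothing in it “seeks” anything.

        # Toy bigram "language model": count character transitions, then sample.
        import random
        from collections import defaultdict, Counter

        corpus = "the model only predicts the next token. the model does not check the world. "

        # "Training": count which character follows which.
        counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1

        def generate(seed, length=80):
            out = list(seed)
            for _ in range(length):
                dist = counts.get(out[-1])
                if not dist:  # unseen context: nothing left to say
                    break
                chars, weights = zip(*dist.items())
                # Sample the next character in proportion to how often it followed.
                out.append(random.choices(chars, weights=weights)[0])
            return "".join(out)

        print(generate("t"))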

      1 reply →

    • In the sense that I expressed, has it not already sought out an accurate meaning of what you asked, and then failed to give a satisfactory answer? I would also ask: is said model an advertised “reasoning” model, and does it have access to external facts via a tool like web search? Without such capabilities, I would not expect a great ability to “arrive at truth”.

      Now, you can’t conclude that “they clearly don’t ‘seek’ anything” just from the fact that they got an answer wrong. Using the broad notion of “seeking” the way you do, a truth seeker with limited knowledge and equipment can confidently arrive at incorrect conclusions through valid reasoning. For example, without modern lenses to detect stellar parallax, one would confidently conclude that the stars are a different kind of thing from the sun and planets, since those wander across the sky while the stars stay fixed relative to one another. Plato indeed thought so, and nobody would accuse him of not being a truth-seeker.

      If this is what you had in mind, I hope I have addressed it; otherwise, I hope you can communicate what you mean with an example.

      2 replies →