Comment by simianwords

5 hours ago

If it were not "just a statistical next token machine", how different would it behave?

Can you find an example and test it out?

Wait, you're asking me to find and produce an example of a feasible, better alternative to LLMs, when they're the current forefront of AI technology?

Anyway, just to play along: if it weren't just a statistical next-token machine, the same question would always have the same answer and wouldn't be affected by a "temperature" value.
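To make the temperature point concrete, here's a toy sketch of temperature-scaled next-token sampling (illustrative names, not any particular model's API): at temperature 0 the sampler collapses to argmax and is fully deterministic, while higher temperatures flatten the distribution and vary the output.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from logits via temperature-scaled softmax.

    temperature -> 0 collapses to argmax (deterministic);
    higher temperature flattens the distribution (more varied output).
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]
print(sample_next_token(logits, temperature=0))  # always index 0 (argmax)
```

With temperature > 0, repeated calls on the same logits can return different tokens, which is exactly the behavior being described.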

  • That's also how humans behave. I don't see how non-determinism tells me anything.

    My question was a bit different: if it were not just a statistical next-token predictor, would you expect it to answer hard questions? Or something like that. What's the threshold of questions you want it to answer accurately?

    • Well, large models are (kinda) non-deterministic in two ways. The first is that many of them actually let you provide a seed, which is easy to manage: just use the same seed to get the same result. The second is that you have very little control over the "neural pathways" the model will use to respond to the prompt. This is the baffling part: you'll prompt a model to generate a green plant, and it works. You prompt it to generate a purple plant, and it generates an abstract demon dog with too many teeth.

      Anyway, neither of these things describes human non-determinism. You can't reuse the seed you used with me yesterday to get the exact same conversation, and I don't behave wildly unpredictably given conceptually very similar input.
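The seed point can be sketched in a few lines (a hypothetical toy sampler, not any real model's interface): fixing the RNG seed makes a sequence of "random" draws exactly reproducible, which is the sense in which seeded LLM sampling is deterministic.

```python
import random

def sample(weights, seed, n=5):
    # Hypothetical sketch: a seeded RNG makes the whole draw
    # sequence reproducible, even though each draw is "random".
    rng = random.Random(seed)
    return [rng.choices(range(len(weights)), weights=weights)[0]
            for _ in range(n)]

a = sample([0.5, 0.3, 0.2], seed=42)
b = sample([0.5, 0.3, 0.2], seed=42)
print(a == b)  # True: same seed, same "conversation"
```

A different seed will generally produce a different sequence, which is the reuse-the-seed asymmetry with humans the comment describes.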