
Comment by mlsu

16 hours ago

It’s because the model’s response is conditioned on the prompt. They are only as intelligent as the person using them.

In some sense it’s a lot like a Google search. There’s this big box of knowledge, and you are choosing tokens to pluck out of it. The quality of the tokens depends on how intelligent you are.

Don’t forget, it also depends on the complexity of the work and the experience of the operator.

The less complex the work and the less experienced the operator, the greater the perceived “wow” factor :)

There’s definitely an aspect of how you use it though. In my work it’s mostly been chaining to reduce non-determinism.
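For what that chaining idea might look like in practice, here is a minimal sketch: one open-ended request is split into small, constrained steps, so each call has less room to vary. The `call_llm` function is a hypothetical stand-in (stubbed with a deterministic echo here), not a real API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub: in practice this would call a model;
    # here it just echoes the first line of the prompt so the
    # sketch is runnable and deterministic.
    return f"[answer to: {prompt.splitlines()[0]}]"

def chained_summary(document: str) -> str:
    # Step 1: extract key facts -- a narrow task with constrained output.
    facts = call_llm(f"List the key facts only.\n\n{document}")
    # Step 2: summarize from the extracted facts rather than the raw
    # text, so the final output depends on a smaller, more stable input.
    return call_llm(f"Summarize these facts in one sentence.\n\n{facts}")
```

The point of structuring it this way is that each intermediate output constrains the next step, instead of letting one big prompt drift.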

The irony here is that even if someone is extracting legitimate value from LLMs because they are that much smarter than their peers, offloading all of their skilled labor to LLMs makes them less intelligent over time.