
Comment by Calavar

25 days ago

Clearly the prompting works, but I think the more interesting question is why. Even from a just-get-things-done perspective, understanding the mechanism of how and why a prompting technique works will make you more successful in iterating on that technique in the future. IMHO, that attempt to understand how and why your prompt works before you iterate on it is the difference between prompt engineering and prompt alchemy.

I agree that humans have the same limitation. I don't see the inability to dynamically remove training data as an LLM-specific problem.

For an LLM, conscious thought is the tokens in its context window, and subconscious thought is the training embedded in its parameters. A person thinks in a similar manner, with subconscious gut instincts modulated (more or less) by a thin veneer of consciousness.