Comment by dmos62
8 hours ago
I consider the "perfect forgetfulness" of LLMs a great feature, because I can then precisely select what the context is for a given task. Context is additive, so once something's in it, it's doing something: the most I could do is try to counteract it, which is like playing jailbreak.
Then again, this might be just me. When there's a task to be done, even without an LLM my thought process is about selecting the relevant parts of my context for solving it. What is relevant? What starting point has the best odds of being good? That translates naturally to tasking an LLM.
Let's say I have a spec I'm working on. It's based on a requirements document. If I want to think about the spec in isolation (say I want to ask the LLM which requirements are actually being fulfilled by the spec), I can pass just the spec, without passing the requirements. Then I'll compare the response against the actual requirements.
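A minimal sketch of that deliberate-context workflow, assuming a hypothetical `ask_llm` client (not a real API); the point is only that the prompt is built from explicitly chosen documents, with the requirements deliberately withheld:

```python
def build_context(*documents: str) -> str:
    """Concatenate only the documents deliberately selected for this task."""
    return "\n\n---\n\n".join(documents)

# Illustrative stand-in content, not from any real project.
spec = "The exporter writes one CSV file per table, UTF-8 encoded."
requirements = "R1: export every table. R2: all output must be UTF-8."

# Ask about the spec in isolation: the requirements document is left out
# of the context on purpose, so the model's answer can be compared
# against it afterwards.
prompt = build_context(
    "Which requirements does the following spec actually fulfil?",
    spec,
)

# Nothing was silently injected: the prompt contains the spec and
# question only, never the withheld requirements.
assert spec in prompt
assert requirements not in prompt

# response = ask_llm(prompt)  # hypothetical call; compare response
#                             # against `requirements` by hand.
```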
At the end of the day, I guess I hate the automagicness of a silent context injection. Like I said, it also negates the perfect forgetfulness of LLMs.