Comment by vachina

9 hours ago

An LLM is like a jackhammer: it works very well when you hold it tightly. If you let it loose, it will sort of work for a while, then start destroying everything around it.

Not sure if this is a good analogy. You're supposed to use a jackhammer with a very light grip.

  • I think the analogy actually holds truer with it working better with a _lighter grip_. LLMs tend to conclude the wrong thing if you over-control them (more context is what makes them less and less reliable over time, as in those demos), and trying to force a model to execute A+B+C=D in sequence is far harder than giving it a bunch of tools and letting it arrive at conclusion D.