
Comment by ay

6 days ago

This is a super apt analogy. Every time I decided to let LLMs “vibe-fix” non-obvious things for the sake of experiment, it spiraled into unspeakable fubar territory and had to be reverted - a situation very similar to this financial collapse.

Invariably, once I applied my own brain, the real fix turned out to be quite simple - but, just as invariably, it was hidden behind 2-3 levels of indirection in the reasoning.

On the other hand, I’ve had rather pleasant results with “pair-debugging”: demanding that it explain why, or simply correcting it at the points where it was about to go astray, certainly had an effect - in return I got some really nice spotting of “obvious” but small things I might otherwise have missed.

That said, the definition of “going astray” varies - from innocently jumping to what looked like unsupported conclusions, to blatantly telling me a value was true right after ingesting a log whose printout showed the opposite.