Comment by ben_w

5 days ago

Necessarily, LLM output that works isn't gibberish.

The code that LLMs output has worked well enough to learn from since the initial launch of ChatGPT. That was true even back when you might have to repeatedly say "continue" because the model would stop in the middle of writing a function.

  Necessarily, LLM output that works isn't gibberish.

Hardly. Poorly conjured-up code can still work.
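
A minimal illustration of that point (a made-up Python snippet, not anything from the thread): code can be clumsy, redundant, and badly named, yet still return the right answer.

    # Hypothetical example: deliberately clumsy but functional code.
    def do_thing(x):
        out = []
        for i in range(len(x)):          # index loop where a comprehension would do
            tmp = x[i]
            if tmp % 2 == 0:
                out = out + [tmp * 2]    # rebuilds the list instead of appending
            else:
                out = out + []           # pointless branch that does nothing
        return out

    print(do_thing([1, 2, 3, 4]))  # prints [4, 8] despite the mess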

  • "Gibberish" code is necessary code which doesn't work. Even in the broader use of the term: https://en.wikipedia.org/wiki/Gibberish

    Especially in this context, if a mystery box solves a problem for me, I can look at the solution and learn something from it, cf. how paper was inspired by watching wasps at work.

    Even the abject failures can be interesting, though I find them more helpful for forcing me to make my writing easier to understand.