Comment by glemion43

1 day ago

I've been carrying a thought around for the last few weeks:

An LLM is a transformer. It transforms a prompt into a result.

Or a human idea into a concrete Java implementation.

Currently I'm exploring what unexpected or curious transformations LLMs are capable of, but I haven't found much yet.

At the very least, I was surprised myself that an LLM can turn a description of something into an image by first transforming it into an SVG.
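
Rough sketch of what I mean, assuming the OpenAI Python client; the model name and the prompt are just placeholders, any chat-capable LLM would do:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Return only SVG markup, no prose, for a simple "
                   "flat-style icon of a paper boat on water.",
    }],
)

# Write the generated markup to disk; open it in any browser to see the image.
with open("boat.svg", "w") as f:
    f.write(resp.choices[0].message.content)
```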

Format conversions (text → code, description → SVG) are the transformations most people reach for first. To me the interesting ones are cognitive: your vague sense → something concrete you can react to → refined understanding. The LLM gives you an artifact to recognize against, and that recognition ("yes, more of that" or "no, not quite") is where understanding actually shifts. Each cycle sharpens what you're looking for, a bit like a flywheel: each pass feeds into the next.
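
A toy sketch of that loop, assuming the OpenAI Python client; the generate() helper, the model name, and the example prompt are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(prompt: str) -> str:
    # One LLM call; "gpt-4o" is a placeholder model name.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = "Sketch a data model for a personal reading log."
for _ in range(3):  # a few refinement cycles
    artifact = generate(prompt)
    print(artifact)
    reaction = input("More of what? Less of what? ")
    # Fold the reaction back in, so the next cycle starts from a
    # sharper description than the last one.
    prompt = (
        f"{prompt}\n\nPrevious attempt:\n{artifact}\n\n"
        f"Feedback: {reaction}\nRevise accordingly."
    )
```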

  • That's true, but it can be a trap. I recommend always generating a few alternatives to counter our bias toward the first generation. When we don't, we are being led rather than leading.