I've been carrying a thought around for the last few weeks:
An LLM is a transformer. It transforms a prompt into a result.
Or a human idea into a concrete Java implementation.
Currently I'm exploring what unexpected or curious transformations LLMs are capable of, but I haven't found much yet.
At least I myself was surprised that an LLM can transform a description of something into an image by transforming it into an SVG.
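If you want to try it yourself, here's a minimal sketch of that description → SVG → image pipeline. The complete() function is a hypothetical stand-in for whatever LLM client you use; here it returns a canned SVG so the snippet runs on its own:

    # Sketch: description -> SVG -> viewable image.
    # complete() is a hypothetical stand-in for a real LLM call;
    # it returns a canned SVG so the example is self-contained.
    def complete(prompt: str) -> str:
        return ('<svg xmlns="http://www.w3.org/2000/svg" '
                'width="64" height="64">'
                '<circle cx="32" cy="32" r="24" fill="tomato"/></svg>')

    description = "a red circle"
    svg = complete(f"Return only an SVG drawing of: {description}")

    # Any browser or SVG viewer renders the result as an image.
    with open("out.svg", "w") as f:
        f.write(svg)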
Format conversions (text → code, description → SVG) are the transformations most people reach for first. To me the interesting ones are cognitive: your vague sense → something concrete you can react to → refined understanding. The LLM gives you an artifact to recognize against. That recognition ("yes, more of that" or "no, not quite") is where understanding actually shifts. Each cycle sharpens what you're looking for, a bit like a flywheel: each turn feeds into the next.
That's true, but it can be a trap. I recommend always generating a few alternatives to avoid our bias toward the first generation. When we don't, we are led rather than leading.
Ironically, your comment is clearly written by an LLM.
Ironic indeed: pattern-matching the prose style instead of engaging the idea is exactly the shallow reading the post is about.
Your original comment is completely devoid of any substance or originality. Please don't fill the web with robot slop; use your own voice. We both know what you're doing here.
> gets at something fundamental.
LLMs are generators, and that was the correct way to view them at the start. Agents explore.
Generator vs. explorer is a useful distinction, but it's incomplete. Agents without a recognition loop are just generators with extra steps.
What makes exploration valuable is the cycle: act, observe, recognize whether you're closer to what you wanted, then refine. Without that recognition ("closer" or "drifting"), you're exploring blind.
Context is what lets the loop close. You need enough of it to judge the outcome. I think the real shift isn't generators → agents; it's one-shot output → iterative refinement with judgment in the loop.
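In code, the loop I mean looks roughly like this. generate() and judge() are hypothetical placeholders for a model call and the recognition step (a human reaction or a scored check), stubbed out here so the sketch is runnable:

    # Sketch of iterative refinement with judgment in the loop.
    # generate() and judge() are hypothetical placeholders,
    # stubbed so the example is self-contained.
    def generate(goal: str, feedback: str = "") -> str:
        # In practice: an LLM call with the goal plus feedback.
        suffix = f" (revised per: {feedback})" if feedback else ""
        return f"draft for {goal!r}{suffix}"

    def judge(goal: str, draft: str) -> tuple[bool, str]:
        # The recognition step: closer or drifting?
        # In practice: a human reaction or an automated score.
        return ("revised" in draft, "tighten the framing")

    def refine(goal: str, max_rounds: int = 5) -> str:
        draft = generate(goal)                      # act
        for _ in range(max_rounds):
            done, notes = judge(goal, draft)        # observe + recognize
            if done:
                break
            draft = generate(goal, feedback=notes)  # refine
        return draft

    print(refine("an outline of the flywheel idea"))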
Please stop.