Comment by ReverseCold

6 hours ago

> I can run and test the code at my discretion, whereas the AI model can't.

It sounds like you know what the problem with your AI workflow is? Have you tried using an agent? (sorry, somewhat snarky, but… come on)

Yeah, you're right, and the snark might be warranted. I should treat it the same as my stupid (but cute) robot vacuum cleaner that drives off in random directions but gets the job done.

The thing that differentiates LLMs from my stupid but cute vacuum cleaner is that the model (at least OpenAI's) is cocksure and wrong, which is infinitely more infuriating than being a bit clueless and wrong.

  • I've been trying to solve this by wrapping the generation in a LangGraph loop (roughly the shape sketched below). The hope was that an agent could catch the errors, but it seems to just compound the problem: you end up paying for ten API calls while the model confidently doubles down on the same mistake, which gets expensive very quickly for no real gain.
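
For what it's worth, this is a minimal sketch of the loop I mean. The LangGraph wiring (`StateGraph`, conditional edges) is the library's real API, but `generate_code` and `run_tests` are hypothetical stand-ins for the model call and the test runner, and `MAX_ATTEMPTS` is the cap I added so the doubling-down at least has a ceiling on cost:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

MAX_ATTEMPTS = 3  # hard cap so a confidently wrong model can't burn ten calls


class LoopState(TypedDict):
    code: str      # latest generated code
    error: str     # last test failure, "" if the tests passed
    attempts: int  # API calls spent so far


def generate_code(previous: str, error: str) -> str:
    # Hypothetical stand-in for the model call; the real version would send
    # the previous attempt plus the failure text back to the API.
    return f"# retry after: {error!r}" if error else "# first attempt"


def run_tests(code: str) -> str:
    # Hypothetical stand-in for actually running the code, which is the one
    # thing the model can't do; returns "" on success, the failure otherwise.
    return ""


def generate(state: LoopState) -> LoopState:
    code = generate_code(state["code"], state["error"])
    return {"code": code, "error": state["error"], "attempts": state["attempts"] + 1}


def check(state: LoopState) -> LoopState:
    return {**state, "error": run_tests(state["code"])}


def route(state: LoopState) -> str:
    # Stop on success or once the budget is spent; retrying a model that
    # doubles down on its mistake just multiplies the bill.
    if not state["error"] or state["attempts"] >= MAX_ATTEMPTS:
        return "done"
    return "retry"


graph = StateGraph(LoopState)
graph.add_node("generate", generate)
graph.add_node("check", check)
graph.set_entry_point("generate")
graph.add_edge("generate", "check")
graph.add_conditional_edges("check", route, {"retry": "generate", "done": END})
app = graph.compile()

print(app.invoke({"code": "", "error": "", "attempts": 0}))
```

The cap is the only part that actually helped: without it, each retry just re-runs the same confident mistake at full price.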