Comment by hunterpayne

2 days ago

It's great for demos, it's lousy for production code. The different cost of errors in these two use cases explains (almost) everything about the suitability of AI for various coding tasks. If you are the only one who will ever run it, it's a demo. If you expect others to use it, it's not.

As the name indicates, a demo is used for demonstration purposes. A personal tool is not a demo. I've seen a handful of folks assert this definition, and it seems like a very strange idea to me. But whatever.

Implicit in your claim about the cost of errors is the idea that LLMs introduce errors at a higher rate than human developers. This depends on how you're using the LLMs and on how good the developers are. But I would agree that in most cases, a human saying "this is done" carries a lot more weight than an LLM saying it.

Regardless, it is not good analysis to try to do something with an LLM, fail, and conclude that LLMs are stupid. The reality is that LLMs can be impressively and usefully effective at certain tasks in certain contexts, that they can be very ineffective in others, and that they are especially poor at judging whether they've done something correctly.

  • > But I would agree that in most cases, a human saying "this is done" carries a lot more weight than an LLM saying it.

    That's because humans have stakes. If a human tells me something is done and I later find out that it isn't, they damage their credibility with me in the future - and they know that.

    You can't hold an LLM accountable.