Comment by lowsong

4 days ago

I'm the first to admit that I'm an AI skeptic, but this goes way beyond my views about AI and is a fundamentally unsound idea.

Let's assume that a hypothetical future AI is perfect. It will produce correct output 100% of the time, with no bugs, errors, omissions, security flaws, or other failings. It will also generate output instantly and cost nothing to run.

Even with such perfection, this idea is doomed to failure, because the AI can only write code based on the information in the prompt, which is written by a human. Any ambiguity, unstated assumption, or omission would result in a program that didn't work quite right. Even a perfect AI is not telepathic. So you'd need to describe your intended solution extremely precisely, without ambiguity, especially since in this "offline generation" case there is no opportunity for our presumed perfect AI to ask clarifying questions.

But, by definition, any language which is precise and clear enough to not produce ambiguity is effectively a programming language, so you've not gained anything over just writing code.
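
To make that concrete, here's a small illustration (the spec wording, the function name, and the field names are all invented for the example): once every ambiguity is squeezed out of a natural-language description, the description determines the code almost token for token.

    # Spec, with the ambiguity removed:
    #   "Given a list of orders, return the sum of the 'price' field
    #    (a float) of every order whose 'status' field is exactly the
    #    string 'shipped'; return 0.0 for an empty list."
    #
    # The unambiguous spec is, in effect, already the program:
    def total_shipped(orders: list[dict]) -> float:
        return sum((o["price"] for o in orders
                    if o["status"] == "shipped"), 0.0)

Loosen the spec even slightly ("recent orders", "roughly the total") and it no longer determines the code, which is exactly the point.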

This is so eloquently put, and it really captures the absurdity of the notion that code itself will become redundant in building a software system.

We already have AI agents that can ask a human for help / clarification in those cases.

It could also analyze the company website, marketing materials, and so forth, and use that to infer the missing pieces. (Again, something that exists today.)

  • If the AI has to ask for clarification, you can’t run it as a reproducible build step as envisaged. It’s as if your compiler paused to ask clarifying questions on every CI run.

  • If the company website, marketing materials, and so forth become part of the input, you’ll have to put those in version control as well, since any change is likely to result in a different application being generated (which may or may not be what you want). A sketch of what that pinning would look like follows below.
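
To illustrate that second point, here's a minimal sketch, assuming a hypothetical project layout (prompt.md, the snapshot files, and inputs.lock are invented names, not any real tool's convention): for the generation step to be reproducible, every input that can influence the output has to be pinned and checked like any other build input.

    # Sketch: pin every input to the AI "generation" build step.
    # All file names here are hypothetical.
    import hashlib
    import sys
    from pathlib import Path

    # Everything that can influence the generated application,
    # including website and marketing snapshots, must be pinned
    # in version control.
    PINNED_INPUTS = [
        Path("prompt.md"),
        Path("snapshots/company-website.html"),
        Path("snapshots/marketing-brochure.pdf"),
    ]
    LOCKFILE = Path("inputs.lock")

    def combined_digest(paths):
        """Hash all pinned inputs in a fixed order."""
        h = hashlib.sha256()
        for p in paths:
            h.update(p.name.encode())
            h.update(p.read_bytes())
        return h.hexdigest()

    digest = combined_digest(PINNED_INPUTS)
    if not LOCKFILE.exists():
        LOCKFILE.write_text(digest + "\n")  # first run: record the pin
    elif LOCKFILE.read_text().strip() != digest:
        # Any drift in any input may silently change the generated app.
        sys.exit("pinned inputs changed; update inputs.lock deliberately")

Even then, the model itself is another input: unless its version and decoding are pinned and deterministic, the same pinned inputs can still produce a different application on the next run.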