Comment by naasking

5 days ago

> But English is a subjective and fuzzy language, and the AI typically can't intuit the more subtle points of what you need.

I disagree on the "can't". LLMs seem neither better nor worse than humans at making assumptions when given a description of needs, which is unsurprising, since they infer those assumptions from examples of humans doing the same thing. And in principle, nothing prevents a targeted programming system from asking clarifying questions before it generates anything.
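To make the clarifying-questions point concrete, here is a minimal sketch of such a loop. Everything here is hypothetical: `model` stands in for any LLM API (stubbed below with canned responses), and `clarify_then_generate` is an illustrative name, not a real library function.

```python
# Hypothetical sketch: an assistant that asks clarifying questions
# before generating output, rather than guessing at intent.

def clarify_then_generate(request, model, answer_fn, max_questions=3):
    """Let the model ask clarifying questions, collect the user's
    answers, then request the final output with enriched context."""
    context = [f"Request: {request}"]
    for _ in range(max_questions):
        question = model("ask_question", "\n".join(context))
        if question is None:  # model considers the spec sufficient
            break
        context.append(f"Q: {question}")
        context.append(f"A: {answer_fn(question)}")
    return model("generate", "\n".join(context))

# Stub model with canned behavior, for demonstration only: it asks
# one question, then generates once an answer is present.
def stub_model(mode, context):
    if mode == "ask_question":
        if "A:" in context:
            return None
        return "Which language should the program be in?"
    return "print('hello')"

result = clarify_then_generate(
    "Write a hello-world program",
    stub_model,
    answer_fn=lambda q: "Python",
)
```

The point is only that the interaction pattern is an ordinary control loop; nothing about it requires the model to guess silently.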

> In my experience a model's output always needs further prompting.

Yes, and the early days of every kind of tooling were crude. Don't underestimate the march of progress.