
Comment by jamalaramala

18 hours ago

> Even more concerning was Devin’s tendency to press forward with tasks that weren’t actually possible. (...)

> Devin spent over a day attempting various approaches and hallucinating features that didn’t exist.

One of the big problems with GenAI is its inability to know what it doesn't know.

Because of that, it doesn't ask clarifying questions.

Humans, in the same situation, would spend a lot of time learning before they could be truly productive.

Your statement is factually wrong: Claude 3.5 v2 asks clarifying questions when needed "natively", and you can add similar instructions to your prompt for any model.
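As a minimal sketch of what "add similar instructions in your prompt" can look like, here is one way to prepend such an instruction using the common `{"role", "content"}` chat-message convention. The wording of the instruction and the helper name are illustrative, not taken from any particular vendor's documentation.

```python
# Hypothetical sketch: wrap a user request with a system prompt that
# encourages the model to ask clarifying questions instead of guessing.

CLARIFY_INSTRUCTION = (
    "Before starting a task, check whether the request is ambiguous or "
    "under-specified. If it is, ask clarifying questions instead of "
    "guessing. If the task appears impossible, say so explicitly."
)

def build_messages(user_request: str) -> list[dict]:
    """Build a chat-message list with a clarifying-questions system prompt."""
    return [
        {"role": "system", "content": CLARIFY_INSTRUCTION},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Integrate our app with the payments API.")
```

The resulting `messages` list can then be passed to whichever chat-completion endpoint you use; the point is only that the behavior is steered by the prompt, not baked into the model.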

  • The default system prompts are tuned for the naive case. LLMs, being all-purpose text-handling tools, can be reprogrammed for any behavior you wish. This is the crux of skilled use of LLMs.

    The better the LLMs get, the worse the average prompt quality becomes.