Comment by pdimitar
9 months ago
As a caveat, I told it to make minimal code for one task and it completely skipped a super important aspect of it, justifying it by saying that I said "minimal".
Not cool, Claude 3.7, not cool.
Doesn't trading prompt patches trying to get around undefined behavior from the model make you wonder if this is a net positive?
Huh? I'm not even sure what you said, can you clarify?
I thought the value proposition of using LLMs to code was the lower cognitive load of simply describing what you want in natural language. But if writing the prompt turns out to be so involved that you end up trading snippets on forums, and you often run into undefined behavior (the thing you described turned out to be ambiguous to the LLM and it gave you something you did not expect at all)...
I have to wonder: wouldn't just writing the code yourself be more productive in the end?