Comment by thoughtpeddler
6 days ago
I was recently "pair-vibe-coding" with someone who's much smarter than I am, certainly when it comes to coding, especially Python. He's always been in the LLM skeptic camp, and it was fascinating to see that, because of his extensive Python knowledge, his prompting was actually minimal and weak, one might even say 'lazy'. The output he got from o3 for our task was therefore really mediocre, with a few hallucinations (which could've been avoided had he spent a few more seconds or minutes on the prompt).
I, knowing far less than he does, would've written a much more elaborate prompt, and o3 would've come across as far more competent and capable. But my friend, knowing so much already and holding such a high bar, expects the AI to do a lot more with just a few basic words in a prompt... and then, for that same reason, he (understandably) doubts the inevitably sub-par output.
That's what makes all these debates about "Why are smart people doubting LLMs??" so pointless. The smarter you are, the less help you need, so the less prompting you do; the less prompting, the less context the model has, the less impressive the output, and the more convinced you become that LLMs suck. By this logic, of course the smartest people are also the biggest skeptics!
I doubt this holds true in general. The smart coders I know who use LLMs tend to develop a decent intuition for what the models are good and bad at, and for how to steer them toward good performance.
Then perhaps my friend has remained a skeptic for so long that his skills in this regard have atrophied (which OP's post touches on). Either way, most of his day job is as a CTO/manager at a startup, so he's not in the weeds coding much anymore in the first place. I should've watched how he prompts LLMs for managerial tasks; then I'd know whether his 'prompt laziness' is systemic or tied to his coding knowledge.