Comment by th0raway
2 days ago
We opened the Cloud Code floodgates all at once in my org. After a few months we looked at the stats and asked managers for their impressions of performance changes. API cost per engineer doesn't correlate with the apparent performance gains, but it sure seems that the vast majority of people who used to get good reviews got a lot better, while the bottom third just didn't, even though they use the LLMs about as much. It makes the performance differences between teams look like an abyss. Someone appears stuck on a task, we look at what they've been prompting, and then one of the best seniors comes in, actually asks the questions well, and the LLM does all the debugging and all the fixing in 20 minutes.
It's not that the best performers are magical prompt engineers providing detailed instructions: they ask better questions that the LLM knows how to try to answer, and they provide the specific information the LLM would take a while to find. It's as if some people just have no "theory of mind" of the LLM and what it can know, and others just do. It's not a living thing or anything like that, but it's still so useful to predict it, to put yourself in its shoes, so to speak, just like you'd do with a new hire or a random junior.
This comment is buried deep but I think it's actually quite important. In 2005 you had the elderly googling "Can I have a recipe for an apple pie? Thank you." while the kids typed "apple pie recipe" and clicked the first result. Some (most?) people just weren't capable of conceptualizing the abstract idea of "internet search," so they talked to the machine the way they'd talk to other humans.
Until the coronavirus pandemic, virtually anyone, regardless of mental skills, could get a high-paying job as a coder; there was no filter at all.
What we'll observe now is the split between those who can conceptualize what an AI is and those who cannot. The latter group will be stuck talking to the AI in a way that doesn't leverage how it actually works.