
Comment by ehnto

11 hours ago

> help the "AI can't do ____" crowd grasp that it really is a skill issue when they can't get any good results from LLMs.

I think the issue with this conversation is that no one tells you what they are working on. I suspect there is both a skill gap in usage and a lack of capability in the LLMs, with both surfacing as the same outcome.

There is definitely stuff an LLM cannot do on its own, at which point you have to ask whether the LLM is really achieving the outcome or whether the human is achieving it by backseat driving. Much like a senior telling a junior how to solve a tricky bug: you wouldn't say the junior came up with the solution, and therefore you wouldn't say the junior is capable of solving the bug.