Comment by Eisenstein
3 days ago
This is a typical problem you see in autodidacts. They will recreate solutions to solved problems, trip over issues that could have been avoided, and generally do all of the things you would expect from someone working with skill but no experience.
LLMs accelerate this and make it more visible, but they are not the cause. It is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.
I am hopeful autodidacts will leverage an LLM world the way they leveraged the Internet-search world, which followed the library world, which followed the printed-word world. Each stage in that progression compressed the time it took them to get a working grasp of a new body of understanding before applying it in practice, expanded the range of problems they applied that new understanding to, and deepened their adoption of best practices instead of reinventing the wheel.
In this regard, I see LLMs as a way for us to far more efficiently encode, compress, convey, and put into operational practice our combined learned experience. What will be really exciting is watching what happens as LLMs simultaneously draw from and contribute to those learned experiences as we do; we don't need full AGI to realize massive benefits from rapidly, recursively enabling a new, highly dynamic form of our knowledge sphere that drastically shortens the distance from knowledge to deeply nuanced praxis.
> [The cause] is almost always a person trying to solve a problem and just not knowing what they don't know because they are learning as they go.
Isn't that what "using an LLM" is supposed to solve in the first place?
With the right prompt, the LLM will solve it. But this is an issue of not knowing what you don't know, which makes it difficult to write the right prompt in the first place. One way around this is to spawn more agents with specific tasks, or to have an agent that is ONLY focused on finding patterns/code where you're reinventing the wheel.
I often have one agent/prompt that builds things, and then another agent/prompt whose only job is to find code smells, bad patterns, and outdated libraries, and to file issues or fix these problems.
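A minimal sketch of what that split can look like, assuming the OpenAI Python SDK as the backend; the model name, both prompts, and the `ask` helper are illustrative assumptions, not anyone's production setup:

```python
# Two-agent split: a builder that writes code, and a reviewer whose only
# job is to flag code smells, outdated libraries, and reinvented wheels.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

BUILDER_PROMPT = "You write code to satisfy the user's feature request."
REVIEWER_PROMPT = (
    "You do not write features. Your only job is to find code smells, "
    "bad patterns, outdated libraries, and places where the code "
    "reinvents something a standard library or well-known package "
    "already provides. Report each finding as a separate issue."
)

def ask(system_prompt: str, user_content: str) -> str:
    """One round trip with a fixed system role (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content

# The builder produces code; the reviewer audits it with no other agenda.
code = ask(BUILDER_PROMPT, "Implement an LRU cache in Python.")
issues = ask(REVIEWER_PROMPT, f"Review this code:\n\n{code}")
print(issues)  # ideally points out that functools.lru_cache already exists
```

The point of the split is that the reviewer has no stake in the code it audits, so it isn't tempted to defend the builder's choices.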
1. LLMs can't watch over someone and warn them when they are about to make a mistake
2. LLMs are obsequious
3. Even if LLMs have access to a lot of knowledge, they are very bad at contextualizing it and applying it practically
I'm sure you can think of many other reasons as well.
People who are driven to learn new things and to do things are going to use whatever is available to them in order to do it. They are going to get into trouble doing that more often than not, but they aren't going to stop. No one is helping the situation by sneering at them; they are used to it, anyway.
My impression is that LLM users are the kind of people who HATED that their questions on StackOverflow got closed as duplicates.
> My impression is that LLM users are the kind of people who HATED that their questions on StackOverflow got closed as duplicates.
Lol, who doesn't hate that?
I don't know; in 40 years of coding I never had to ask a question there.
So literally everyone in the world? Yeah, seems right!
I would love to see your closed SO questions.
But don't worry, those days are over; the LLM is never going to push back on your ideas.