Comment by charcircuit
2 hours ago
You are using it wrong, or are using a weak model, if your failure rate is over 50%. My experience is nothing like this; it very consistently works for me. Maybe there is a <5% chance it takes the wrong approach, but you can quickly steer it in the right direction.
You are using it on easy questions. Some of us are not.
I think a lot of it comes down to how well the user understands the problem, because that determines the quality of instructions and feedback given to the LLM.
For instance, I know some people have had success getting Claude to do game development. I have never bothered to learn much of anything about game development, but I have been trying to get Claude to do the work for me. Unsuccessful. It works for people who understand the problem domain, but not for those who don't. That's my theory.
It works for hard problems when the person has already solved them and just needs the grunt work done.
It also works for problems that have been solved a thousand times before, which impresses people and makes them think it is actually solving those problems.
Don’t use it for hard questions like this, then; you wouldn’t use a hammer to cut a plank, you’d reach for a saw instead.