Comment by troupo
1 year ago
In order for it to boost productivity it needs to answer more than the regular questions for the top-3 languages on Stackoverflow, no?
It often fails even for those questions.
If I need to babysit it for every line of code, it's not a productivity boost.
Why does it need to answer more than that?
You underestimate the opportunity that exists for automation out there.
In my own case I've used it to make a simple custom browser extension for transcribing PDFs. I don't have the time and wouldn't have made the effort to build the extension myself; the task would have continued to be done manually. It took two hours to make and it works, which is all I need in this case.
Perfection is the enemy of good.
> Perfection is the enemy of good.
Where exactly did I write anything about perfection? For me "AIs" are incapable of producing working code: https://news.ycombinator.com/item?id=41534233
You said you have to babysit each line of code, but this is simply untrue: if it works, there's no need to babysit. The only reason you'd need to babysit every single line is if you're looking for perfection, or if it's something very obscure or unheard of.
Your example is perhaps valid, but there are other examples where it does work, as I mentioned. I think the issue may be imprecise prompting: too general, or with too little logical structure. It's not like Google search; the more detailed and technical your language, the better. Assume you're talking to a very precise expert. Its intelligence is very general, so it needs precision to avoid confusing subject matter. Giving your request a well-structured logic also helps, since its reasoning isn't the greatest.
Good prompting and verifying output is often still faster than manually typing it all.
If you need to babysit it for every line of code, you're either a superhuman coder, working in some obscure alien language, or just using the LLM wrong.
No. I'm just using it for simple things like "Help me with this Elixir code" or "I need to list Bonjour services using Swift".
It's shit across the whole "AI" spectrum from ChatGPT to Copilot to Cursor aka Claude.
I'm not even talking about code I work with at work, it's just side projects.
As for "using LLMs wrong": using them "right" literally means babysitting their output and spending a lot of time trying to reverse-engineer their behavior with increasingly inane prompts.
Edit: I mean, look at this ridiculousness: https://cursor.directory/