Comment by Spivak
2 months ago
Even when it hallucinates it still solves most of the unknown unknowns which is good for getting you unblocked. It's probably close enough to get some terms to search for.
Have you tried using AI only for things you already know for a while? I almost exclusively do (because I haven't found that LLMs speed up my actual process much), and I can tell you that the things LLMs generally leave out, forget, or don't "know" about are plentiful. They result in tons of debugging, and fixing them usually requires me to "metagame" heavily and ask pointed questions that someone without my knowledge simply wouldn't know to ask. An LLM can't even give you basic OpenGL code in C for simple framebuffer blitting without missing something that'll cost you hours, potentially a whole day, in debugging time.
Add to this that someone who uses an LLM to "just do things" for them like this is very unlikely to build much useful knowledge, and so can't really resolve these issues themselves. It's a recipe for disaster, and not at all a time saver over simply learning and doing it yourself.
For what it's worth, I've found that LLMs are pretty much only good for well-understood basic theory that can give you a direction to look in, and that's about it. I used to use GitHub Copilot (which years ago was (much?) better than Cursor with Claude Sonnet just a few months ago) to tab-complete boilerplate and such, but concluded that overall I wasn't really saving time and energy. As nice as tab-completing boilerplate sometimes was, it also invariably turned into "it suggested something interesting, let's see if I can mold it into something useful", which ate up valuable time, generally led nowhere good, and was just disruptive.
I don't think so. How can you be so sure it solves the "unknown unknowns"?
Sample size of 1, but it definitely did in my case. I've gained a lot more confidence when coding in domains or software stacks I've never touched before, because I know I can trust an LLM to explain things like the basic project structure and unfamiliar parts of the ecosystem, to bounce ideas off of, and to produce a barebones one-file prototype that I rewrite to my liking. That opens up a whole lot of tasks that otherwise simply wouldn't justify the time expenditure, where it would be effort-prohibitive to even try to automate or build a thing.
Because I've used it for problems where it hallucinated code that didn't actually exist, but was close enough to tell me the right terms to search for in the docs.
I interpreted that as you rushing to code something you should have approached with a book or a guide first.