Comment by doix
13 hours ago
> What if it hallucinates and gives you wrong code?
Then the code won't compile, or more likely your editor/IDE will flag it as invalid. If you're using something like Cursor in agent mode, invalid code gets detected and the LLM keeps re-running until it produces something valid.
> It is better to read documentation and tutorials first.
I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
As for my editor saying it is invalid...? That is just as untrustworthy as an LLM.
>I "trust" LLM's more than tutorials, there's so much garbage out there.
Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
> Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.
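Something like this sketch (Python, with a hallucinated name I made up):

```python
import json

# A plausible-sounding function an LLM might invent.
# The real API is json.loads; json.load_string does not exist,
# so any IDE, linter, or type checker flags it straight away.
data = json.load_string('{"key": "value"}')  # AttributeError
```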
Yes, AI can suggest syntactically valid code that does the wrong thing. If it's blatantly wrong, that's not really an issue either, because you'll spot it immediately.
The problem is when it suggests something that is syntactically valid and looks like it works but is ever so slightly wrong. But in my experience, it's pretty common to come across stuff like that in "tutorials" as well.
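What I mean is something like this hypothetical snippet: it runs, looks plausible, and no tool will complain, but the slice is off by one:

```python
def average_of_last_n(values, n):
    # Intended: average of the last n values.
    # Bug: values[-n - 1:] grabs n + 1 elements, not n.
    # The correct slice is values[-n:].
    return sum(values[-n - 1:]) / n

print(average_of_last_n([1, 2, 3, 4], 2))  # 4.5, but the right answer is 3.5
```

Only a test or a careful review catches that kind of thing.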
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late '00s was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated, though.
> Yes, AI can suggest syntactically valid code that does the wrong thing
To be fair, as a dev with ten or fifteen years of experience, I do that too. That's why I always have to thoroughly test the results of new code before pushing to production. People act as if using AI should remove that step, or alternatively, as if it suddenly got much more burdensome. But honestly, it's the part that has changed least for me since adopting an AI-in-the-loop workflow. At least the AI can help with writing automated tests now, which helps a bit.
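For example, here's a sketch of the kind of test scaffolding an LLM will happily draft (pytest; `parse_price` is just a stand-in for whatever new code I'm testing):

```python
import pytest

def parse_price(text: str) -> float:
    # Stand-in for the new code under test.
    if not text.startswith("$"):
        raise ValueError(f"not a price: {text!r}")
    return float(text[1:])

def test_parses_dollars_and_cents():
    assert parse_price("$12.50") == 12.50

def test_rejects_garbage():
    with pytest.raises(ValueError):
        parse_price("twelve-ish")
```

It's still on me to check that the tests make sense, but it cuts down the boilerplate.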
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
Emphatic no.
There were heaps of rubbish being generated by people for years before the advent of AI, in the name of SEO and content marketing.
I'm actually amazed at how well LLMs work given what kind of stuff they learned from.
Wait, are you saying you don't trust language servers embedded in IDEs to tell you about problems? How about syntax highlighting or linting?