Comment by dxroshan
13 hours ago
> What I've noticed in that respect is that I just read what it does and then immediately reason why it's there ....
What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.
> What if it hallucinates and gives you wrong code
Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, invalid code gets detected and the LLM keeps re-running until it produces something valid.
> It is better to read documentation and tutorials first.
I "trust" LLMs more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
As for my editor saying it is invalid...? That is just as untrustworthy as an LLM.
> I "trust" LLMs more than tutorials, there's so much garbage out there.
Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
> Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".
I interpreted the "hallucination" part as the AI using functions that don't exist. I don't consider that a problem because it's immediately obvious.
Yes, AI can suggest syntactically valid code that does the wrong thing. If it obviously does the wrong thing, then that's not really an issue either because it should be immediately obvious that it's wrong.
The problem is when it suggests something that is syntactically valid and looks like it works but is ever so slightly wrong. But in my experience, it's pretty common to come across stuff like that in "tutorials" as well.
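To make the "slightly wrong" case concrete, here's a made-up Python example of the kind of thing I mean: code that runs fine and looks plausible, but `itertools.groupby` only groups *consecutive* equal keys, so the result is silently wrong unless the input is sorted first.

```python
from itertools import groupby

orders = [("alice", 10), ("bob", 5), ("alice", 7)]

# Looks right, but groupby only merges consecutive equal keys,
# so the second "alice" run overwrites the first in the dict.
wrong = {name: sum(v for _, v in grp)
         for name, grp in groupby(orders, key=lambda o: o[0])}

# Correct: sort by the key before grouping.
orders.sort(key=lambda o: o[0])
right = {name: sum(v for _, v in grp)
         for name, grp in groupby(orders, key=lambda o: o[0])}

print(wrong)  # {'alice': 7, 'bob': 5}  -- alice's first order silently lost
print(right)  # {'alice': 17, 'bob': 5}
```

Nothing flags the first version; you only catch it by checking the output or already knowing the pitfall.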
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
I pretty strongly disagree. As soon as it became popular for developers to have a "brand", the amount of garbage started growing. The stuff written before the late 00's was mostly good, but after that the balance began slowly shifting towards garbage. AI definitely increased the rate at which garbage was generated though.
> Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.
Emphatic no.
There were heaps of rubbish being generated by people for years before the advent of AI, in the name of SEO and content marketing.
I'm actually amazed at how well LLMs work given what kind of stuff they learned from.
Wait, are you saying you don't trust language servers embedded in IDEs to tell you about problems? How about syntax highlighting or linting?
Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?
Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.
[0]: https://knowyourmeme.com/memes/how-to-draw-an-owl
The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.
Also, some things are meant to be approached with the correct foundational knowledge (you can't do 3D without geometry, trigonometry, and matrices, plus a healthy dose of physics). Almost every time I've seen people struggling with documentation, it was because they lacked domain knowledge.
What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?
That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering and DevOps are all about putting in guardrails to detect such "hallucinations".
Fair question. So far I've seen two things:
1. The code doesn't compile. What to do in this case is obvious.
2. The code does compile.
I don't work in Cursor; I read the code quickly to see the intent, and when done with that I copy/paste it and test the output.
You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars, even though I didn't know polars could do that, I understand it because I know SQL. Then I check the output; if it corresponds to what I expect a group-by function to do, I move on.
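Something like this, as a minimal sketch (the data and column names are made up, and I'm assuming a recent polars where the method is spelled `group_by`):

```python
import polars as pl

# Hypothetical data, just to illustrate what a group_by call looks like.
df = pl.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen"],
    "sales": [10, 20, 5],
})

# Reads like SQL's GROUP BY: one row per city with the summed sales.
totals = df.group_by("city").agg(pl.col("sales").sum())
print(totals)  # expect Oslo = 30, Bergen = 5
```

If the printed totals match what a SQL GROUP BY would give, that's my sanity check done.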
There comes a point where I need more granularity and precision. That's the moment I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do know something: a few hours' worth of it. That's enough to roughly know where everything is and not be stuck in setup hell and the like.

Moreover, just by reading the code, I get a rough idea of how beginner-to-intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in the direction of how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.
AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
Even when it hallucinates, it still solves most of the unknown unknowns, which is good for getting you unblocked. It's probably close enough to get some terms to search for.
I don't think so. How can you be so sure it solves the 'unknown unknowns'?
Sample size of 1, but it definitely did in my case. I've gained a lot more confidence when coding in domains or software stacks I've never touched before, because I know I can use an LLM to explain things like the basic project structure and unfamiliar parts of the ecosystem, bounce ideas off of it, or produce a barebones one-file prototype that I rewrite to my liking. That covers a whole lot of tasks that simply wouldn't justify the time expenditure otherwise and would be effort-prohibitive to even try to automate or build.
Because I've used it for problems where it hallucinated some code that didn't actually exist, but that was good enough to tell me the right terms to search for in the docs.
Most tutorials fail to mention meta info like the system they're using and the versions of things, which can be a real pain.