Comment by melvinroest

10 hours ago

> The rest is boiler plate, cargo-culted, Dockerfile, build system and bash environment variable passing circle of hell that I really could care less about.

Even more so. I remember making a Chrome extension and feeling intimidated. I knew I'd be comfortable with most of it, given that it's JS, but I just didn't know where to start.

With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into and you have the JS of a background worker. Ah yea, I get that."
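
To make that concrete, a minimal manifest.json (Manifest V3) looks roughly like this, with placeholder permissions and file names:

    {
      "manifest_version": 3,
      "name": "My Extension",
      "version": "1.0",
      "permissions": ["storage", "activeTab"],
      "action": { "default_popup": "popup.html" },
      "content_scripts": [
        { "matches": ["https://example.com/*"], "js": ["content.js"] }
      ],
      "background": { "service_worker": "background.js" }
    }

Here popup.html is the extension's own UI, content.js is the code injected into the page, and background.js is the background worker.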

And then I immediately get on with coding.

> What I've noticed in that respect is that I just read what it does and then immediately reason why it's there ....

What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.

  • > What if it hallucinates and gives you wrong code

    Then the code won't compile, or more likely your editor/IDE will flag it as invalid. If you're using something like Cursor in agent mode, invalid code gets detected and the LLM keeps re-running until something valid comes out.

    > It is better to read documentation and tutorials first.

    I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.

    • Plenty of incorrect code compiles. It is a very bad sign that people are making comments like "Then the code won't compile".

      As for my editor saying it is invalid? That is just as untrustworthy as an LLM.

      >I "trust" LLM's more than tutorials, there's so much garbage out there.

      Yes, rubbish generated by AI. That is the rubbish out there. The stuff written by people is largely good.

  • Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?

    Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.

    [0]: https://knowyourmeme.com/memes/how-to-draw-an-owl

    • The kind of documentation you’re looking for is called a tutorial or a guide, and you can always buy a book for it.

      Also, some things are meant to be approached with the correct foundational knowledge (you can't do 3D without geometry, trigonometry, and matrices, plus a healthy dose of physics). Almost every time I see people struggling with documentation, it's because they lack domain knowledge.

  • Fair question. So far I've seen two things:

    1. Code doesn't compile. This case is obvious on what to do.

    2. Code does compile.

    I don't work in Cursor; I read the code quickly to see the intent, and when done with that I copy/paste it and test the output.

    You can learn a lot by simply reading the code. For example, if I see a `group_by` function call in polars, even if I didn't know polars could do that, I immediately understand it because I know SQL. Then I check the output; if it corresponds to what I expect a group by to do, I move on (see the quick sketch at the end of this comment).

    There comes a point where I need more granularity and precision. That's the moment I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do know something: a few hours' worth of it. That's enough to roughly know where everything is and not be stuck in setup hell and the like. Moreover, just by reading the code, I get a rough idea of how beginner-to-intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in the direction of how to think about it. I see it as a hint, not the definitive answer. I suspect experts think about it differently, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.

    AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
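
    To sketch the `group_by` check I mean (the dataframe and column names here are made up, just to show the shape of it):

        import polars as pl

        df = pl.DataFrame({
            "city": ["Austin", "Austin", "Dallas"],
            "sales": [10, 20, 5],
        })

        # roughly: SELECT city, SUM(sales) FROM df GROUP BY city
        out = df.group_by("city").agg(pl.col("sales").sum())
        print(out)  # two rows: Austin -> 30, Dallas -> 5

    If the output matches what my SQL intuition says a group by should produce, I trust that piece and move on.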

  • What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?

    That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering and devops are all about putting in guardrails to detect such "hallucinations".

  • Even when it hallucinates, it still solves most of the unknown unknowns, which is good for getting you unblocked. It's probably close enough to give you some terms to search for.

  • Most tutorials fail to include meta info like the system they're using and the versions of things, which can be a real pain.