Comment by iamleppert

21 hours ago

There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is the boilerplate, cargo-culted Dockerfiles, build systems, and bash environment variable passing circle of hell that I couldn't care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.

Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least reliably available and you don't have to bother some grey-beard who probably hates you anyway.

> The rest is the boilerplate, cargo-culted Dockerfiles, build systems, and bash environment variable passing circle of hell that I couldn't care less about.

Even more so: I remember making a Chrome extension and feeling intimidated. I knew I'd be comfortable with most of it, given that it's JS, but I just didn't know where to start.

With an LLM it is way faster to spin up some default config and get going than it is to read a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason about why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into, and you have the JS of a background worker. Ah yea, I get that."
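
For anyone who hasn't made one: the manifest it spins up looks roughly like this (a minimal Manifest V3 sketch; the names, permissions, and match patterns are placeholders):

```json
{
  "manifest_version": 3,
  "name": "my-extension",
  "version": "0.1.0",
  "permissions": ["storage"],
  "content_scripts": [
    {
      "matches": ["https://example.com/*"],
      "js": ["content.js"],
      "css": ["content.css"]
    }
  ],
  "background": { "service_worker": "background.js" }
}
```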

And then I just get straight to coding.

  • > What I've noticed in that respect is that I just read what it does and then immediately reason about why it's there ....

    What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.

    • > What if it hallucinates and gives you wrong code

      Then the code won't compile, or more likely your editor/IDE will flag it as invalid. If you're using something like Cursor in agent mode, invalid code gets detected automatically and the LLM keeps re-running until it produces something valid.
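
      A toy sketch of that generate-check-retry loop (`ask_llm` here is a hypothetical callable standing in for whatever model API the agent wraps, not a real Cursor interface):

      ```python
      import ast

      def is_valid_python(source: str) -> bool:
          """Cheap validity check: does the generated snippet at least parse?"""
          try:
              ast.parse(source)
              return True
          except SyntaxError:
              return False

      def generate_until_valid(ask_llm, prompt: str, max_tries: int = 5) -> str:
          """Keep re-asking until the output parses, like an agent-mode loop."""
          for _ in range(max_tries):
              code = ask_llm(prompt)
              if is_valid_python(code):
                  return code
              prompt += "\n# The previous attempt had a syntax error; fix it."
          raise RuntimeError("no syntactically valid code after retries")
      ```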

      > It is better to read documentation and tutorials first.

      I "trust" LLM's more than tutorials, there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.

    • Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?

      Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.

      [0]: https://knowyourmeme.com/memes/how-to-draw-an-owl

    • Fair question. So far I've seen two things:

      1. The code doesn't compile. In this case it's obvious what to do.

      2. The code does compile.

      I don't work in Cursor, so I read the code quickly to see the intent, and when done with that I copy/paste it and test the output.

      You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars, I might not have known polars could do that, but now I do, because I know SQL. Then I check the output: if it corresponds to what I expect a group-by to do, I move on.
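
      For the record, the kind of call I mean; a minimal polars sketch with made-up data:

      ```python
      import polars as pl

      df = pl.DataFrame({
          "city":  ["NY", "NY", "LA"],
          "sales": [10, 20, 5],
      })

      # Reads like SQL: SELECT city, SUM(sales) FROM df GROUP BY city
      print(df.group_by("city").agg(pl.col("sales").sum()))
      ```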

      There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do: I know a few hours' worth of it. That's enough to roughly know where everything is, and not be in setup hell and similar things.

      Moreover, by just reading the code, I get a rough idea of how beginner-to-intermediate programmers think about the problem space the code is written in, as there's always a certain style of writing certain code. This points me in a direction for how to think about it. I see it as a hint, not as the definitive answer. I suspect experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.

      AI helps with quicker bootstrapping, by virtue of your reading the code it produces. And when it gets actually complicated and/or interesting, I ditch it :)

    • What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read are out of date, incorrect, or for a different version than you expect?

      That's not a jab but a serious question. We act like people don't "hallucinate" all the time; modern software engineering and DevOps practice is all about putting in guardrails to detect such "hallucinations".

    • Even when it hallucinates, it still solves most of the unknown unknowns, which is good for getting you unblocked. It's probably close enough to give you some terms to search for.

    • Most tutorials fail to add meta info like the system they're using and the versions of things, which can be a real pain.

I think the fear, for those of us who love coding, stability, and security, is that we are going to be confronted with apples that are rotten on the inside, and that our work, our love, is going to turn (even more so) into pain. The challenge in computing is that the powers that decide have little overview of the actual quality and longevity of any endeavour.

I work as a consultant assessing other people's code, and it's hard not to lose my religion, so to speak.

So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.

Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I'd just as soon never deal with the hell that is front-end build/lint/test config again.

  • > The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts)

    AI-generated tests are a bad idea.

    • Just as with anything else AI-generated, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.

  • Aren't stdlib functions the ones you know by heart after a while anyways?

    • Depends on the language. Python, for instance, has a massive standard library, and there are entire modules I use anywhere from once a year to once a decade, or never at all until some new project needs them.
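
      A sketch of the kind of once-a-decade corner I mean (graphlib really is stdlib, since Python 3.9; the dependency data is made up):

      ```python
      from graphlib import TopologicalSorter

      # node -> its prerequisites; static_order() yields prerequisites first
      deps = {"app": {"lib"}, "lib": {"base"}}
      print(list(TopologicalSorter(deps).static_order()))  # ['base', 'lib', 'app']
      ```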

They perhaps haven't taken away your keyboard, but anecdotally a few friends work at places where their boss is requiring them to use the LLMs. So you may not have to code with them, but some people are starting to be chained to them.

  • Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.

    • How long until it becomes the rule because of some arbitrary "productivity" metric? Sure, you may not be forced to use it, but you'll be fired for being "unproductive".

    • You write that like the latter is in opposition to the former, yet the content suggests the latter is the former.

  • And even when that's not the case, you are still indirectly working with them, because your coworker is, and "somehow" their code has gotten worse.

> The rest is the boilerplate, cargo-culted Dockerfiles, build systems, and bash environment variable passing

I keep seeing people saying to use an LLM to write boilerplate, but like... do you not just copy that from another project where you already wrote it?

  • No, because it's usually a few years old and already obsolete - the frameworks and the language have gone through a gazillion changes and what you did in 2021 suddenly no longer works at all.

      I mean, the training data also has a cutoff date, and changes beyond that are not reflected in the code suggestions.

      Also, I know that people love to joke about modern software and JS in particular. But if you take React code from 2020 and drop it into a new React codebase, it still works. Even class-based components work. Yes, if you jumped on the newest framework bandwagon every time, stuff will break all the time, but AI won't be able to help you with that either. If you went for relatively stable frameworks, you can reuse boilerplate completely, or with relatively minimal adjustments.

    • lol, I've been cutting and pasting from the same projects I started in 2010. When you work in vanilla js it doesn't change.

  • I keep seeing that suggestion as well, and the only sensible case I can see is one-off boilerplate; anything else does not make sense.

    If you re-use boilerplate only once in a while, copying it from elsewhere is fine. If you re-use it all the time, just set up a macro in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat-consistent boilerplate.

    • You know, I have my boilerplate in Rails and it is just a work of art... I simply clone my BP repo, bundle, migrate, run, and I have user management, auth, an SMTP client, SMS alerts, and literally everything I need to get started. And just this week I decided to try a code assistant, and my result was shockingly good: once you provide the assistant with a good, clean starting point, and you are very clear on what you want to build, the results are just too good to be dismissed.

      So yes, boilerplate, but also yes, there is definitely something to be gained from using AI assistants.

Like many others writing here, I enjoy coding (well, mostly anyway), especially when it requires deep thought and patient experimentation to get anywhere. It's also great to preside over finally wiring together the routines (modules, libraries) that bind a project into a coherent whole.

I haven't much used AI to assist. After all, it's hard enough finding authentic humans capable of and willing to voluntarily review/critique one's code, and so far AI doesn't consistently provide that kind of help. OTOH it seems almost certain that over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.

I think an issue is that human creativity is hard to measure; likely enough, AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, ensuring parts can be joined together without undue strain, and handling "boilerplate" and other routine chores. To be sure, the landscape will look different in 50 years; I'm certain we'd be amazed were we able to see what future systems will be doing.

In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling the development of higher-reliability software. Still, that's a far cry from the contributions emanating from the best of human originality, talent, and motivation.

> doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least reliably available and you don't have to bother some grey-beard who probably hates you anyway.

That’s a lot of trauma you’re dealing with.

I read one characterization: LLMs don't give new information (except to the user who's learning), but they reorganize old information.

  • That's only true if you tokenize words rather than characters. Character-level tokenization can generate new content outside the training vocabulary.

    • All major tokenisers have explicit support for encoding arbitrary byte sequences. There's usually a consecutive range of tokens reserved for 0x00 to 0xFF, and you can encode any novel UTF-8 words or structures with it, including emoji and characters that weren't part of the model's initial training, if you show it some examples.
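
      You can watch the byte fallback happen with tiktoken, for instance (the input string is arbitrary; `decode_single_token_bytes` shows the raw bytes behind each token):

      ```python
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      tokens = enc.encode("🦉 flibbertigibbet")  # emoji + a rare word
      # Every token decodes to a byte sequence, so nothing is truly
      # out-of-vocabulary; unfamiliar text just costs more tokens.
      print([enc.decode_single_token_bytes(t) for t in tokens])
      ```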

I think you're robbing yourself.

Of course, it all depends on how you use the LLM. While the same can be true for StackOverflow, LLMs just scale the issues up.

  > The rest is the boilerplate, cargo-culted Dockerfiles, build systems, and bash environment variable passing circle of hell that I couldn't care less about.

Except you do care. It's why you're frustrated and annoyed. And good!!! That feeling is there because what you're describing requires solving. If something is routine, automate it. But it's really not good to automate it in a statistical way, especially when that statistical tool is optimized for human preference. Because remember, that also means mistakes are optimized to be missed by humans. [0]

With expertise in anything, I'm sorry, but you've also got to do the shit work. To be a great musician you've got to practice boring scales. It's true even if you just want to be a sub-par one.

But a little grumpiness is good. It drives you to fix things, and frankly, that's our job. The things that are annoying and create friction don't need to be repeated over and over; they need alternative solutions. The scripts you build are valuable. The "useless" knowledge you gain isn't so useless. Those little details add up without you knowing and make you better.

That undocumented code makes you frustrated and reminds you to document your own; you don't want to be a hypocrite. The author of the thing you're using probably thought the same thing: "No one is gonna use this garbage, I'm not going to waste my time documenting it." Yet here we are, over and over again, and we don't learn the lesson.

I'm not gonna deny there's assholes. There are. But even assholes teach you. At worst, they teach you how not to act.

And some people are telling you to RTM, not RTFM. Sure, the manual has lots of extra information in it that you don't need to get your specific job done, but have you also considered that it has lots of extra information in it? The person who wrote it clearly thought the context was important. Maybe it isn't. In that case, you've learned a lesson in how not to write documentation!

What I'm getting at is that there's a lot of learning done all over the place. Trying to take out all the work and keep only "the fun" is harming yourself, and has a large potential to leave less time for the fun stuff [0]. I'd be surprised if I'm alone in this: a lot of the stuff I enjoy now originally frustrated me. IME this is pretty common! It's true for every person I know. Similarly, it's also true for things I learned that I thought I'd never use again. It always has a way of coming back.

I'm not going to pretend it's all fun and games. I'm explicitly saying it's not. But I'm confident that in the long run it's better. Despite the lack of accuracy, I use LLMs (and Google, and even TFM) like I would a solutions guide for homework problems back in school: try first, then consult. The struggle is an investment in your future. It sucks, but if all the best things in life were easy, we'd all have them. I'm just trying to convince you that it pays off.

I'm completely aware this is all context dependent. There's a time and place for everything. But given the percentages you mention (even taken as exaggeration), something sounds wrong. It's hard to suggest specific solutions without details, but I'd be surprised if there weren't better and more rewarding solutions than having the LLM do it for you.

[0] That's the big danger, and what drives distrust in them. You need to work extra hard to find mistakes, increasing the workload rather than decreasing it, because debugging is most of the job!

This.

Frankly, I don't want to spend 2 hours reading documentation just to find the one arcane incantation that gets the computer to do what I want it to do.

The interesting part of programming to me is designing the logic. It's the 'this, then that, except when this' flow that I'm really interested in, not the search for some obscure library with some function that will parse this CSV.
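
And half the time the "obscure function" turns out to be in the stdlib anyway; a minimal sketch (`data.csv` is a stand-in filename):

```python
import csv

with open("data.csv", newline="") as f:
    for row in csv.DictReader(f):  # rows come back as dicts keyed by the header
        print(row)
```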

LLMs are great for that, and they let me get away from the pointless grind and into the things that I enjoy and that actually provide value.

The pair programming is also a super good thing. I work best when I can spitball, throw out random ideas, and get quick feedback. LLMs let me do that without bothering others who have their own work to do.