Comment by quirino

2 days ago

> As a bonus, we look forward to fewer violations (exhibit A, B, C) of our strict no LLM / no AI policy,

Hilarious how the offender on "exhibit A" [1] is the same one from the other post that made the frontpage a couple of days ago [2].

[1] https://news.ycombinator.com/item?id=46039274

My old rule about the difference between coding and software engineering:

  For coding, "it seems to work for me" is good enough. For software engineering, it's not.

My new rule:

  For coding, you can use AI to write your code. For software engineering, you can't.

  • > For coding, you can use AI to write your code. For software engineering, you can't.

    You can 100% use AI for software engineering. Just not by itself: for now you have to stay quite engaged in the process to check it and redirect it.

    But AI lowers the barrier to writing code, so it brings people with less rigour into the field, and they can do a lot of damage. That isn't significantly different from how programming languages made coding more accessible than assembly language - and I am sure that also allowed more people to cause damage.

    You can use any tools you want, but you have to be rigorous about it no matter the tool.

  • > For coding, you can use AI to write your code. For software engineering, you can't.

    This is a pretty common sentiment. I think it equates using AI with vibe-coding, having AI write code without human review. I'd suggest amending your rule to this:

    > For coding, you can use AI. For software engineering, you can't.

    You can use AI in a process compatible with software engineering. Prompt it carefully to generate a draft, then have a human review and rework it as needed before committing. If the AI-written code is poorly architected or redundant, the human can use the same AI to refactor and shape it.

    Now, you can say this negates the productivity gains. It will necessarily negate some. My point is that the result is comparable to human-written software (such as it is).

    • 100% this.

      Just don't expect to get decent code often if you mostly rely on something like Cursor's default model.

      You literally get what you pay for.

  • I absolutely don't care about how people generate code, but they are responsible for every single line they push for review or merge.

    That's my policy at each of my clients and it works fine. If AI makes something simpler/faster, good for the author, but there are zero excuses, none, for pushing slop or code you haven't thoroughly reviewed and tested yourself.

    If somebody thinks they can offload not just authoring or editing code, but also the responsibility for it and for its impact on the whole codebase and the underlying business problem, they should be jobless ASAP. They are de facto delegating the entirety of their job to a machine, and they are providing not zero value but in fact negative value.

    • Totally agree. For me, the hard part has been making the distinction with junior engineers... Is this poorly thought out, inefficient solution that is 3x as long as necessary due to AI, or to inexperience?

      15 replies →

  • I feel like the distinction is equivalent to

        LLMs can make mistakes. Humans can't.
    

    Humans can and do make mistakes all the time. LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

    I think the underlying problem people have is that they don't trust themselves to review code written by others as much as they trust themselves to implement the code from scratch. Realistically, a very small subset of developers do actual "engineering" to the level of NASA / aerospace. Most of us just have inflated egos.

    I see no problem modelling the problem, defining the components, interfaces, APIs, data structures and algorithms, and letting the LLM fill in the implementation and the testing. Well designed interfaces are easy to test anyway, and you can tell at a glance if it covered the important cases. It can make mistakes, but so would I. I may overlook something when reviewing, but the same thing often happens when people work together. Personally I'd rather do architecture and review at a significantly improved speed than gloat that I handcrafted each loop and branch, as if that somehow made the result safer or faster (exceptions apply, ymmv).
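
    To make that split concrete, here's a minimal sketch in Julia; the KeyValueStore / MemoryStore names and the store_* functions are made up for illustration, not from any real project:

        # Human work: model the problem and pin down the contract.
        abstract type KeyValueStore end

        store_put!(s::KeyValueStore, k, v) = error("put! not implemented for $(typeof(s))")
        store_get(s::KeyValueStore, k, default) = error("get not implemented for $(typeof(s))")

        # LLM work: fill in a concrete implementation of the contract.
        struct MemoryStore <: KeyValueStore
            data::Dict{Any,Any}
        end
        MemoryStore() = MemoryStore(Dict{Any,Any}())

        store_put!(s::MemoryStore, k, v) = (s.data[k] = v; s)
        store_get(s::MemoryStore, k, default) = get(s.data, k, default)

        # Review work: a well defined interface makes the important
        # cases easy to check at a glance.
        using Test
        @testset "KeyValueStore contract" begin
            s = MemoryStore()
            @test store_get(s, :missing, nothing) === nothing
            store_put!(s, :a, 1)
            @test store_get(s, :a, nothing) == 1
        end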

    • "I feel like these distinctions are equivalent to

          LLMs can make mistakes. Humans can't."
      

      No, that's not it. The difference between humans and AI is that AI suffers no embarrassment or shame when it makes mistakes, and the humans enthusiastically using AI don't seem to either. Most humans experience a quick and visceral deterrent when they publish sloppy code and their mistakes are discovered. AI, not at all. It does not immediately learn from its mistakes the way most humans do.

      In the rare case where a human is as persistently and confidently wrong as AI, a project can identify that person and easily stop wasting time working with them. With masses of people being told by vocal AI shills how amazing AI is, projects can easily be flooded with confidently wrong AI-generated PRs.

    • If unit tests are boring chores for you, or 100% coverage is somehow a goal in itself, then your understanding of quality software development is quite lacking overall. Tests are specifications: they define behavior, set boundaries, and keep the inevitable growth of complexity under control. Good tests are what keep a competent developer sane. You cannot build quality software without starting from tests. So if tests bore you, the problem is your approach to engineering. Mature developers don't get bored chasing 100% coverage – they focus on meaningful tests that actually describe how the program is supposed to work.

      1 reply →

    • > LLMs can automate most of the boring stuff, including unit tests with 100% coverage. They can cover edge cases you ask them to and they can even come up with edge cases you may not have thought about. This leaves you to do the review.

      in my experience these tests don't test anything useful

      you may have 100% test coverage, but it's almost entirely useless, because it doesn't test the actual desired behaviour of the system

      rather just the exact implementation
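
      to illustrate, a made-up Julia sketch (normalize isn't from any real PR, just an example): the first assertion only restates the implementation, so it can never fail for the right reason, while the last two state the behaviour callers actually rely on

          using Test

          # a made-up function under test
          normalize(s) = lowercase(strip(s))

          # implementation-coupled: mirrors the code, vacuously true,
          # breaks on any refactor rather than on any real bug
          @test normalize("  Foo ") == lowercase(strip("  Foo "))

          # behaviour-focused: states the contract of the function
          @test normalize("  Foo ") == "foo"
          @test normalize("BAR") == normalize("bar  ")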

      1 reply →

Check out this dude: https://github.com/GhostKellz?tab=repositories

He's got like 50 repos with vibe-coded, non-working Zig and Rust projects. And he clearly manages to confuse people with it:

https://github.com/GhostKellz/zquic/issues/2

  • I don’t think this is uncommon. At one point Lemmy was a project with thousands of stars and literally no working code until finally someone other than the owner adopted it and merged in a usable product.

    • Wow, and if you go to the website listed in their profile, not only do almost none of the links work, the one that does just links out to the generic template it was copied from. Wow.

  • Hustle hustle. I'm not disgusted by this person, but by the system that promotes or requires such behaviour.

oh god... he has a humongous AI generated PR for julia too https://github.com/tshort/StaticCompiler.jl/pull/180

  • More context/discussion on this: https://discourse.julialang.org/t/ai-generated-enhancements-...

    (Honestly, that's a lot more patience than I'd be able to give what are mostly AI-generated replies, so kudos to these folk.)

    • When confronted about LLM writing completely broken tests the guy said the funniest thing: "It knows what it’s doing but tends to be… lazy."

      I'm a big fan of LLMs, but this guy is just a joke. He understands nothing of the code the LLM generates. He says things like "The LLM knows".

      He is not going to convince anybody to merge his PRs, since he is not even checking that the tests the LLM generates are correct. It's a joke.

      8 replies →

    •   function estimate_method_targets(func_name::Symbol, types::Tuple)
            # Conservative estimate
            # In a real implementation, we'd query the method table
            return 2  # Assume multiple possibilities
        end
      

      Hilarious. Was this model trained on XKCD [0] by any chance?

      [0]: https://xkcd.com/221/

      1 reply →

    • As an aside, he originally titled the thread "A complete guide to building static binaries with Julia (updated for 1.12)", with no mention of AI. That got me excited every time I opened the Discourse, until I remembered it was this slop. :/

      1 reply →

  • Maybe this guy is it: the actual worst coder in the world

    • Well that's the origin story for the main character on Solo Leveling, so...

      Actually, I probably shouldn't make this comment publicly. It could cause another 3-5 programmer-isekai anime series.

    • A question I only dare to ask myself in these times of LLMs: Is this even a real human being, or already an instance of an ‘agentic system’?

    • Lots of people are criticising this guy, but we all benefit from having an example to point at and say: please don't do what this guy is doing. Please read the generated code, understand it, edit it and then submit it.

      If anyone’s answer to “why does your PR do this” is “I don’t know, the AI did it and I didn’t question it” then they need a time out.

  • I guess we now have the equivalent of cowboy builders, but for software. Except no one asked for anything to be built in this case lol.

  • The people of Jonestown collectively drank less kool-aid than all this.

    I don't know whether to be worried or impressed.

  • I had $1000 in Claude credits and went to town.

    Yes, I made mistakes along the way.

    • The biggest mistake, AI or not, is dropping a 10K+ line PR. 300~500 LOC is about as far as one should go, unless it's some automated refactoring, e.g. formatting the entire StaticCompiler.jl source. That should've been a distinct PR, preferably by a maintainer.

      10 replies →

    • Please don't tell me you actually spent $1000 on generating fake tests....

    • It's truly astonishing to me that your account has existed since 2008 and you decided to pull this.

      As a troll job for the lulz it is some amazing work. Hats off

    • You've wasted other people's time and mental energy with utter bullshit that you couldn't even be bothered to read yourself. Be more considerate in future.

    • This isn't just "making mistakes." It's so profoundly obnoxious that I can't imagine what you've actually been doing during your apparently 30 years of experience as a software developer, such that you somehow didn't, or still don't, understand why submitting these PRs is completely unacceptable.

      The breezy "challenge me on this" and "it's just a proof of concept" remarks are infuriating. Pull requests are not conversation starters. They aren't for promoting something you think people should think about. The self-absorption and self-indulgence beggar belief.

      Your homepage repeatedly says you're open to work and want someone to hire you. I can't imagine anybody looking at those PRs or your behavior in the discussions and concluding that you'd be a good addition to a team.

      The cluelessness is mind-boggling.

      It's so bad that I'm inclined to wonder whether you really are human -- or whether you're someone's stealthy, dishonest LLM experiment.

My favorite of his https://x.com/joelreymont/status/1990981118783352952

> Claude discovered a bug in the Zig compiler and is in the process of fixing it!

...a few minutes later...

https://github.com/ziglang/zig/pull/25974

I can see a future job interview scenario:

- "What would you say is your biggest professional accomplishment, Joel?"

- "Well, I almost single-highhandedly drove Zig away from Github"

  • > Well, I almost single-handedly drove Zig away from Github

    If you think about it, Joel is a net positive to Zig and its community!

  • Those overly enthusiastic responses from the LLM are really going to do a number on people's egos.

I'm not sure if this is advanced trolling at this point.

  • I'll one-up you: at this point I'm becoming pretty sure that this is a person who actually hates LLMs and is trying to poison the well by giving other people reasons to hate them too.

    • I envy your optimism. The truth is that humans are generally stupider and more craven than you have apparently even begun to conceive.

  • Is the AI bubble just billionaires larping about their favorite dystopian scifi?

Ah. I remember that guy. Joel. He sold his poker server and bragged about it around HN a long time ago. He has been too much of a PR-stunt guy recently. Unfortunately AI does not lead to people being nice in the end. The way people abuse other people using AI is crazy. Kudos to the OCaml maintainers for giving him a firm but polite f-off response.

>MAJOR BREAKTHROUGH ACHIEVED

the bootlicking behavior must be like crack for wannabes. jfc

>I did not write a single line of code but carefully shepherded AI over the course of several days and kept it on the straight and narrow.

>AI: I need to keep track of variables moving across registers. This is too hard, let’s go shopping… Me: Hey, don't take any shortcuts!

>My work was just directing, shaping, cajoling and reviewing.

How people can say that without the slightest bit of reflection on whether they're right or just spitting BS is beyond me.

I agree that's a funny coincidence. But what about the change it wanted to commit? It is at least slightly interesting. It is doubly interesting that changing line 638 neither breaks nor fixes any test.

That one was poorly documented and may have been related to an issue in my code.

I would offer this one instead.

https://github.com/joelreymont/zig/pull/1

  • Even after the public call-outs you keep dropping blatant ads for your blog and AI in general in your PRs; there's no other word for them than ads. This is why I blocked you on the OCaml forum already.

    When I was a kid, every year I'd get so obsessed about Christmas toys that the hype would fill my thoughts to the point I'd feel dizzy and throw up. I genuinely think you're going through the adult version of that: your guts might be ok but your mind is so filled with hype that you're losing self-awareness.