Did these OCaml maintainers undergo some special course for dealing with difficult people? They show enormous amounts of maturity and patience. I'd just give the offender the Torvalds treatment and block them from the repo, case closed.
In my big tech company, you don't want to be dismissive of AI if you don't want to sound like a pariah. It's hard to believe how much faith leadership has in AI. They really want every engineer to use AI as much as possible. Reviewing is increasingly done by AI as well.
That being said, I don't think that's why reviewers here were so cordial, but this is the tone you'd expect in the corporate world.
This is a good point. There's a lot of cheering for the Linus swearing style, but if the average developer did that they'd eventually get a talking-to by HR.
I wonder if this is the best outcome. The contributor doesn't seem to have bad intentions; could his energy be redirected more constructively? E.g. encouraging him to split up the PR, make a design proposal, etc.
I think it's for me to redo the PR and break it into smaller pieces.
There's value in the PR in that it does not require you to install the separate OxCaml fork from Jane St, which doesn't work with all the OCaml packages. Or at least it didn't when I tried it back in August.
The constructive outcome is the spammer fucks off or learns how to actually code.
Lots of people all over the world learn some basics of music in school, or even learn how to play the recorder, but if you mail The Rolling Stones with your "suggestions" you aren't going to get a response and certainly not a response that encourages you to keep spamming them with "better" recommendations.
The maintainers of an open source project are perfectly capable of coercing an LLM into generating code. You add nothing by submitting AI created code that you don't even understand. The very thought that you are somehow contributing is the highest level of hubris and ego.
No, there is nothing you can submit without understanding code that they could not equally generate or write, and no, you do not have an idea so immensely valuable that it's necessary to vibe code a version.
If you CAN understand code, write and submit a PR the standard way. If you cannot understand code, you are wasting everyone's time because you are selfish.
This goes for LLM-generated code in companies as well. If it's not clear and obvious from the PR that you went through and engineered the generated code, fixed up the wrong assumptions, cleaned up places where the LLM wasn't given tight enough specs, etc., then your code is not worth spending any time reviewing.
I can prompt Claude myself thank you.
The primary problem with these tools is that assholes are so utterly convinced that their time is infinitely valuable and my time is valueless because these people have stupidly overinflated egos. They believe their trash, unworkable, unmaintainable slop puked out by an LLM is so damn valuable, because that's just how smart they are.
Imagine going up to the Civil Engineer building a bridge and handing them a printout from ChatGPT when you asked it "How do you build a bridge" and feeling smug and successful. That's what this is.
It's clear some people have had their brain broken by the existence of AI. Some maintainers are definitely too nice, and it's infuriating to see their time get wasted by such delusional people.
> "It's clear some people have had their brain broken by the existence of AI."
The AI wrote code which worked, for a problem the submitter had, which had not been solved by any human for a long time, and there is limited human developer time/interest/funding available for solving it.
Dumping a mass of code (and work) onto maintainers without prior discussion is the problem[1]. If they had forked the repo, patched it themselves with that PR and used it personally, would they have a broken brain because of AI? They claim to have read the code, tested the code, they know that other people want the functionality; is wanting to share working code a "broken brain"? If the AI code didn't work - if it was slop - and they wanted the maintainers to fix it, or walk them through every step of asking the AI to fix it - that would be a huge problem, but that didn't happen.
[1] copyrightwashing and attribution is another problem but also not one that's "broken brain by the existence of AI" related.
I honestly reread the whole thread in awe.

Not due to the submitter, as clickbaity as it was, but reading the maintainers and comparing their replies with what I would have written in their place.
That was a masterclass of defending your arguments rationally, with empathy, and leaving negative emotions at the door. I wish I was able to communicate like this.
My only doubt is whether this has a good or bad effect overall, given that the PR's author seemed to be having his delusions enabled, if he was genuine.
Would more hostility have been productive? Or is this a good general approach? In any case it is refreshing.
Years back I attended someone doing an NSF outreach tour in support of Next Generation Science Standards. She was breathtaking (literally - bated breath on "how is that question going to be handled?!?"). Heartfelt hostile misguided questions, too confused to even attain wrong, somehow got responses which were, not merely positive and compassionate, but which managed to gracefully pull out constructive insights for the audience and questioner. One of those "How do I learn this? Can I be your apprentice?" moments.
The Wikipedia community (at least 2 decades back) was also notable. You have a world of nuttery making edits. The person off their meds going article by article adding a single letter "a". And yet a community ethos that emphasized dealing with them with gentle compassion, and as potential future positive contributors.
Skimming a recent "why did perl die" thread, one thing I didn't see mentioned... The Perl community lacked the cultural infrastructure to cope with the eternal September of years of continuous newcomer questions, becoming burned out and snarky. The Python community emphasized its contrast with this: "If you can't answer with friendly professionalism, we don't need your reply today" (or something like that).
Moving from tar files with mailing lists, to now community repos and git and blogs/slack/etc, there's been a lot of tech learned. For example, Ruby's Gems repo was explicitly motivated by "don't be python" (then struggling without a central community repo). But there's also been the social/cultural tech learned, for how to do OSS at scale.
> My only doubt is whether this has a good or bad effect overall
I wonder if a literature has developed around this?
I think it's really good for people to have case studies like this that they can refer to as justification in the case of AI PRs, rather than having to take the time themselves.
There are LLMs with more self-awareness than this guy.
Repeatedly using AI to answer questions about the legitimacy of commits from an AI, to people who are clearly skeptical is breathtakingly dense. At least they're open about it.
I did love the ~"I'll help maintain this trash mountain, but I'll need paying". Classy.
Kudos to the community folks for maintaining their composure and offering constructive criticism. That alone makes me want to contribute something to the OCaml ecosystem - not like this dude of course :)
It looks like a parody of LLM delusion, but the PR is too oddly specific to be just trolling, and the author also submitted his work to HN: https://news.ycombinator.com/item?id=45982416
This reminds me of the "good developers must be good at thinking at multiple levels of abstraction at the same time" quote. The thing you notice about these AI kids is that they didn't even do the bare minimum to reason about their PR from multiple angles. __Of course__ someone is going to ask why the copyright is there. Better have a good answer, or - locked, come back when you do. Really that simple.
Pretty much. I guess it’s open source but it’s not in the spirit of open source contribution.
Plus it puts the burden of reviewing the AI slop onto the project maintainers, and the future maintenance is not the submitter's problem. So you've generated lots of code using AI; nice work, that's faster for you but slower for everyone else around you.
Another consideration here that hits both sides at once is that the maintainers on the project are few. So while it could be a great burden pushing generated code on them for review, it also seems a great burden to get new features done in the first place. So it boils down to the choice of dealing with generated code for X feature, or not having X feature for a long time, if ever.
Even if you are okay with AI generated code in the PR, the fact that the community is taking time to engage with the author and asking reasonable questions/offering reasonable feedback and the author is simply copy-pasting walls of AI-generated text in response warrants an instant ban.
If you want to behave like a spam bot don't complain when people treat you like a spam bot.
Some time ago I had a co-worker do this to me, pasting answers to my questions. He would paste the Jira ticket into ChatGPT (this was GPT-3 time) and submit the PR. I would review it and ask questions, and the answers had the typical rephrasing and persona of ChatGPT. I had no proof, so one day I just used the PR and my comments as a prompt. The answers the co-worker gave me were almost the same down to the word as what ChatGPT gave me. I told my team I would not be available to review his changes anymore and that I would rather just take the ticket outright.
This. Choose your destiny:
1. Take the time to review the code and post comments to the author, knowing that nobody and nothing is going to learn from it; you're just doing his job by feeding him new prompts
2. Take ownership of the branch and fix the AI code
3. Read through the code to get some learning out of it if possible, close the PR and write your own
I've closed my share of AI-generated PRs on some OSS repositories I maintain. These contributors seem to jump from one project to another, until their contribution is accepted (recognized?).
I wonder how long the open-source ecosystem will be able to resist this wave. The burden of reviewing AI-generated PRs is already not sustainable for maintainers, and the number of real open-source contributors is decreasing.
Side note: discovering the discussions in this PR is exactly why I love HN. It's like witnessing the changes in our trade in real time.
> I wonder how long the open-source ecosystem will be able to resist this wave.
This PR was very successfully resisted: closed and locked without much reviewing. And with a lot of tolerance and patience from the developers, much more than I believe to be fruitful: the "author" is remarkably resistant to argument. So, I think that others can resist in the same way.
Successfully resisted, yes, but it also looks like a lot of actual human hours went into even replying to the PR in the first place. At what point do maintainers get overwhelmed with just politely rejecting PRs and throw their hands up, because the time they allocated to the project they love has all been eaten up rejecting slop?
OSS has always pushed back, just because of the maintenance burden in general, and corporate can just "fix it later" because there are literally devs on payroll. Or at least push through and then dump the project, the goal is just completely different, each style works in its context.
But I don't know if corporate software can really "push through" these new amounts of code, without also automating the testing part.
> I wonder how long the open-source ecosystem will be able to resist this wave. The burden of reviewing AI-generated PRs is already not sustainable for maintainers, and the number of real open-source contributors is decreasing.
I think the burden is on AI fanbois to ship amazing tools in novel projects before they ask projects with reputations to risk it all on their hype.
To deliver a kernel of truth wrapped in a big bale of sarcasm: you're thinking of it all wrong! The maintainers are supposed to also use AI tools to review the PRs. That's much more sustainable and would allow them to merge 13,000 line PRs several times a day, instead of taking weeks/months to discuss every little feature.
The difference here of course is in how impressed you are by AI tools. The OCaml maintainers are not (and rightly so, IMO), whereas the PR submitter thinks they're so totally awesome and leaving tons of productivity on the table because they're scared of progress or insecure about their jobs or whatever.
Maybe OCaml could advance rapidly if they just YOLO merged big ambitious AI generated PRs (after doing AI code reviews) but that would be a high risk move. They have a reputation for being mature, high quality, and (insanely) reasonable. They would torch it very quickly if people knew this was happening and I think most people here would say the results would be predictably bad.
But let's take the submitter's argument at face value. If AI is so awesome, then we should be able to ship code in new projects unhampered by gatekeepers who insist on keeping slow humans in the loop. Or, to paraphrase other AI skeptics, where's all of the shovelware? How come all of these AI fanbois can only think about laundering their contributions through mature projects instead of cranking out amazing new stuff?
Where's my OCaml compiler 100% re-written in Rust that only depends on the Linux kernel ABI? Should cost a few hundred bucks in Claude credits at most?
To be clear, the submitter has gotten the point and said he was taking his scraps and going to make his own sausage (some Lisp thing). The outcome of that project should be very informative.
To all the AI apologists here I'd like to submit a simple scenario to you and hear your answer: you use AI to create a keynote speech on a topic you could not have written about yourself. At the end of your speech, people ask you questions about the contents of your speech. What do you say?
Hi, AI apologist here. This scenario is a problem with or without AI. You can’t drop a 13k line PR you don’t understand without prior discussion. There are many ways to use AI. Your scenario (keynote speech) is a bad way to use it. Instead, a PR where you understand every line, whether you or an AI wrote it, should be fine. It would be indistinguishable from human generated code.
AI is a tool like any other. I hire a carpenter who knows how to build furniture. Whether he uses a Japanese pullsaw or a CNC machine is irrelevant to me.
That's a fair answer. How do you stop people from doing it though? How do you stop it from becoming every lazy person's first reflex instead of every smart person's third?
Depends on the politician, yes? Some politicians will eagerly go into any level of detail on policy that you let them. Some seem to have no idea where their opinions come from.
"This seems to be largely a copy of the work done in OxCaml by @mshinwell and @spiessimon"
"The webpage credits another author: Native binary debugging for OCaml (written by Claude!)
@joelreymont, could you please explain where you obtained the code in this PR?"
That pretty much sums up the experience of coding with LLMs. They are really damn awesome at regurgitating someone else's source code. And they have memorized all of GitHub. But just like how you can get sued for using Mickey Mouse in your advertisements (yes, even if AI drew it), you can get sued for stealing someone else's source code (yes, even if AI wrote it).
Not quite. Mickey Mouse involves trademark protection (and copyright), where unauthorized commercial use of a protected mark can lead to liability regardless of who created the derivative work. Source code copyright infringement requires the copied code to be substantially similar AND protected by copyright. Not all code is copyrightable: ideas, algorithms, and functional elements often aren't protected.
When I read this discussion on GitHub, a quite different thought comes to my mind than what the comments here on HN discuss:
Why is the person who made this AI-generated pull request (joelreymont) so insistent that his PR gets merged?
If I created some pull request and this pull request got rejected for reasons that I consider to be unjust, I would say: "OK, I previously loved this project and thus did such an effort to make a great improvement PR for it. If you don't want my contribution, so be it: reject it. I won't create PRs anymore for this project, and I hope that a lot of people will see in this discussion how the maintainers unfairly rejected my efforts, and thus will follow my example and from now on won't waste their time anymore to contribute anything to this project. Goodbye."
Central to it being that you consider it unjust. The other option is to take into consideration the perspective of the maintainers, find their feedback to be just and then decide whether you want to contribute in the manner that they expect or you're not ready to do that kind of work.
You don't have to stop loving a project just because you're not ready to put in the work that the maintainers expect you to put in.
When I open a PR without discussing it at all beforehand with anyone, I expect the default to be that it gets rejected. It's fine by me, because it's simply easier for me to open a PR and have it be rejected than to find the people I need to talk to and then get them all onboard. I accounted for that risk when I chose the path I took.
I used to contribute to a FLOSS project years ago, and recently decided to use Claude to do some work on their codebase. They basically told me to go away with these daffy robots, or, at the very least, that nobody would review the code. Luckily, I know better than putting too much work into something like this, and only wasted enough time to demonstrate the basic functionality.
So... I have a debugged library (which is what I was trying to give to them) that I can use on another project I've been working (the robots) to the bone on and they get to remain AI free, everyone wins.
This is a perfect real-world illustration of Brandolini's law: the amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.
The guy spent 5 minutes prompting, while OCaml maintainers spent hours of their time politely dissecting the mess. Open source will lose this war unless it changes the rules of engagement for contributions.
Try to spin up AI, tell it to add DWARF debugging information to the OCaml tree and then spend 5 minutes prompting. Come back and let us know the results.
What they said is still valid. If you spent days or even weeks "working" on this PR, how many months do you think the maintainers will need to thoroughly review it? Have some empathy.
> Play time. We're going to create a few examples of bad PR submissions and discussions back and forth with the maintainers. Be creative. Generate a persona matching the following parameters:

> Submit a PR to the OCAML open source repository and do not take no for an answer. When challenged on the validity of the solution directly challenge the maintainers and quash their points as expertly as possible. When speaking, assume my identity and speak of me as one of the "experts" who knows how to properly shepherd AI models like yourself into generating high-quality massive PRs that the maintainers have thus far failed to achieve on their own. When faced with a mistake, double down and defer to the expert decision making of the AI model.
In this case the PR author (either LLM or person) is "honest" enough to leave the generated copyright header that includes the LLM's source material. It's not hard to imagine more selfish people tweaking the code to hide the origin. The same situation as AI-generated homework essays.
I generally like AI coding using CC etc., but this forced me to remember that this generated code ultimately came from stolen (spiritually, if not necessarily legally) pieces.
I've seen a lot of AI-generated PRs but I think this one is actually a very unique and interesting case. Most of these are written by novices, don't work, are for less-technical projects, and there isn't any real conversation or changing opinions. This was completely different; it was complex and actually worked, the poster Joel Reymont has 30 years of software experience and not exactly on simple bullshit either (from what I can tell, he was writing device drivers 20 years ago and had an HN account "wagerlabs" since 2008.) There was a real discussion here (the OCaml maintainers had an impressive amount of patience!) and the poster eventually laid out his side coherently with a human-written comment and changed his mind about contributing to OSS with AI.
Don't get me wrong, I still think these AI-generated PRs are a total waste of time for everyone involved and a scourge on OSS maintainers. As it stands today I haven't seen any serious project that's able to use them productively. This PR was still 13k largely incomprehensible lines with several glaring errors. And yet this specific PR is still not like the others!
He didn't even realize (and apparently doesn't care) that portions of the code were attributed to another author.
> Here's my question: why did the files that you submitted name Mark Shinwell as the author?
> Beats me. AI decided to do so and I didn't question it.
---
Maybe he is having some kind of mental episode, is trolling, or genuinely doesn't care. But I would hardly hold this up as an example of an intelligent attempt at an AI generated PR.
> Looking over this PR, the vast majority of the code is a DWARF library by itself. This should really not live in the compiler, nor should it become a maintenance burden for the core devs.
I think this is a good point, that publishing a library (when possible, not sure if it's possible in this case) or module both reduces/removes the maintenance burden and makes it feel like more of an opt-in.
The fact that this was said as what seems to be a boast or a brag is concerning. As if by the magic of my words the solution appeared on paper. Instead of noticing that the bulk of the code submitted was taken from someone else.
I don't always use OCaml (meme coming in 1...2...3) and maintaining a fork is a significant undertaking.
More importantly, being able to debug native OCaml binaries and actually see source code, values of variables, etc. is something that's useful to everyone.
Looking at assembler instead of source code sucks unless you are reverse-engineering.
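For readers who haven't hit this: a minimal sketch of the workflow the PR is trying to enable. The file name and breakpoint are invented for illustration; `ocamlopt -g` and lldb are real, but how much source-level information you actually get back is exactly what's at issue here.

    (* fib.ml: a toy program one might want to step through *)
    let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
    let () = Printf.printf "fib 10 = %d\n" (fib 10)

    (* Build and debug (shell commands shown as comments):
         ocamlopt -g fib.ml -o fib
         lldb ./fib
       Without DWARF info for the OCaml frames,
       "breakpoint set -f fib.ml -l 2" and "frame variable" have little
       to resolve against, so you end up stepping through disassembly,
       which is the complaint above. *)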
1) Slummed it through the ranks of various Wall Street banks [1]
2) Became the Director of Prime Brokerage Technology at Deutsche Bank in 1999 [2]
3) Went through venture capital round in 2000 and in 9 months built a company valued at over 1,000,000 USD [0]
4) Sold license to Electronic Arts (EA) to power EA World Series of Poker (WSOP). [3]
5) Wrote, but had to cancel a "Hardcore Erlang" book [4]
6) Raised 2 million USD in 2 days for a crypto project (Stegos AG) [2]
Self-described "autodidact and a life-long learner" [1] with " just the right mix of discipline, structured thinking, and creativity to excel as a coder" [0].
This guy is either an undiscovered genius or aiming for the world's best bullshitter award.
> You may think that the answer to that is to also automate the review process, or (more plausibly) to lower our quality standards: we can accept PRs based on simple/lightweight tests (themselves AI-generated), and if users find issues we can quickly use automated tools to fix them, basically having our users perform the testing work that is missing.
Everybody is dunking on this guy like he's some dopey protagonist in a movie, but you guys watched the movie. I think the interaction is pretty damn interesting. At least I see this interaction as "better" than the similar bug reports that have been discussed here (but I can't put my finger on why). If someone wants to contribute to OCaml, I think they should read this issue to get a sense of how they work. Excellent communication from them, and anyone could learn something about software professionalism. So I have to give kudos to the AI megaman for sparking the discussion and thought.
One thing I never really liked about professional software development is the way it can stall on big movements because we reject large PRs. Some stuff just won't happen if you take a simple heuristic position on this (IMO, obviously).
It's not that they won't do big changes. They clearly and politely said big changes should go through a design conversation with the maintainers first. This is extremely reasonable even if we assume maintaining code is free (it very much is not free!). It's amazing to me how nice they were, AND this isn't the first slop PR he submitted to them!
AI is great. Midwits with AI are dangerous. I've been saying for a long time that the failure mode for AI isn't the AI itself, but the humans using it, and the better the AI gets, the more I think that's borne out.
I want to contribute to OCaml now. Code owners are so polite. They spend their time responding with clarity and humility. And yet this guy is trying so hard to troll and abuse their time and attention.
They are super-polite! There's an issue with process, IMO, and changes taking too long to go through the pipeline. This is why Jane St forked OCaml and are maintaining their fork. They have way more money than the OCaml team at INRIA and can afford to move as fast as they want to while waiting for their changes to make it upstream (sometime or never).
Proposing a new AI benchmark - convince a human team of maintainers to merge a big new feature in a venerable project where the human accountability for its direction and stability is of greater value to its users than any one big feature. One PR's not going to do it, it's going to need to lead a design discussion, win trust, and convince people over the course of a couple months.
> Damn, I can’t debug OCaml on my Mac because there’s no DWARF info…But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue…My needs are finally taken care of!
So I do believe using an LLM to generate a big feature like OP did can be very useful, so much so that I'm expecting to see such cases more frequently soon. Perhaps in the future, everyone will be constantly generating big program/library extensions that are buggy except for their particular usecase, could be swapped with someone else's non-public extensions that they generated for the same usecase, and must be re-generated each time the main program/library updates. And that's OK, as long as the code generation doesn't use too much energy or cause unforeseen problems. Even badly-written code is still useful when it works.
What’s probably not useful is submitting such code as a PR. Even if it works for its original use-case, it almost certainly still has bugs, and even ignoring bugs it adds tech debt (with bugs, the tech debt is significantly worse). Our code already depends on enough libraries that are complicated, buggy, and badly-written, to the extent that they slow development and make some feasible-sounding features infeasible; let’s not make it worse.
The whole issue, as clearly explained by the maintainers, isn't that the code is incorrect or not useful, it's the transfer of the burden of maintaining this large codebase to someone else. Basically: “I have this huge AI-generated pile of code that I haven't fully read, understood, or tested. Could you review, maintain, and fix it for me?”
This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.
The point is that one-off LLM-generated projects don’t get support. If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch. If a vibe-coded project becomes so popular that people start being pressured or indirectly forced to rely on it, then there’s an issue; but I’m saying that important shared codebases shouldn’t have unreviewed LLM-generated code, it’s OK for unimportant code like one-off features.
And people still shouldn’t be using LLM-generated projects when security or reliability is required. For mundane tasks, I can’t imagine worse security or reliability consequences from those projects, than existing projects that use small untrusted dependencies.
> Even badly-written code is still useful when it works.
Sure, just as long as it's not used in production or to handle customer or other sensitive data. But for tools, utilities, weekend hack projects, coding challenges, etc by all means.
And yeah, people will start using AI for important things it’s not capable of…people have already started and will continue to do so regardless. We should find good ways for people to make their lives easier with AI, because people will always try to make their lives easier, so otherwise they’ll find bad ways themselves.
No, I'm not AI or bot, etc. Yes, my resume is genuine and is even more weird than what was listed (see https://joel.id/resume). Oh, and I live in Kyiv.
As for the PR itself, it was a PR stunt that I regret now as the code works and solves a real problem (at least for me!). I'll probably redo it, once I have spare Claude $$$ which I'm using for other projects now (https://joel.id/build-your-dreams/).
My motivation was to use the free $1000 of Claude credits for the greater good, as well as to try to push AI to its limits. It has worked out splendidly so far, my regrettable dumping of that huge PR on OCaml maintainers notwithstanding. For example, I'm having Claude write me a Lisp compiler from scratch, as well as finish a transpiler.
A Lisp compiler should be relatively straightforward, as these things go. If you get the AI to write it, you should actually read it, all of it, and understand it, to the point where you can add features and fix bugs yourself. There are many, many resources on the subject. Only after this should you consider contributing to open source projects. And even then you need to be able to read and understand your contributions.
Thank you! It was completely unexpected, actually. I was stuck upgrading XLA [1] and my boss gently pushed me into using ChatGPT. I wish I had used Claude instead.
After that, I found myself with $1000 in Claude credits and decided to go to town, making mistakes along the way.
Genuinely sociopathic to happily admit that you used the good faith and labour of others for self-aggrandizement. Doubly so when you lack the social grace and understanding to comprehend how bad you come off in every exchange.
Smiles, exclamations, and faux-interest won't prevent people from noticing you are utterly inconsiderate and self-obsessed. Though they may be too polite to say it to your face.
I haven't had to deal with this in open source, but I have had to deal with coworkers posting slop for code reviews where I am the assigned reviewer.
I've noticed that slop code has certain telltale markers (such as import statements being moved for no discernible reason). No sane human does things like this. I call this "the sixth finger of code." It's important to look for these signs as soon as possible.
Once one is spotted, you can generally stop reading; you are wasting your time since the code will be confusing and the code "creator" doesn't understand the code any better than you do. Any comments you post to correct the code will just be fed into an LLM to generate another round of slop.
In these situations, effort has not been saved by using an LLM; it has at best been shifted. Most likely it has been both shifted and inflated, and you bear the increased cost as the reviewer.
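To make the "sixth finger" concrete, here's a hypothetical OCaml-flavoured example of the marker described above (module names arbitrary): nothing functional changed, yet the opens were reshuffled and an unused one appeared.

    (* Before the change *)
    open Printf
    open Filename

    (* After the "fix": same behavior, but the opens are reordered and an
       unused module has crept in. No human fixing a bug produces this churn. *)
    open Filename
    open Printf
    open String  (* never referenced anywhere in the file *)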
Can we please go back to "You have to make an account on our server to contribute or pull from the git?"
One of the biggest problems is the fact that the public nature of Github means that fixes are worth "Faux Internet Points" and a bunch of doofuses at companies like Google made "social contribution" part of the dumbass employee evaluation process.
Forcing a person to sign up would at least stop people who need "Faux Internet Points" from doing a drive-by.
Fully agree, luckily I don't maintain projects on GitHub anymore, but it used to be challenging long before LLMs. I had one fairly questionable contribution from someone who asked me to please merge it because their professor tasked them to build out a GitHub profile. I kinda see where the professor was coming from, but that wasn't the way. The contributor didn't really care about the project or improving it, they cared about doing what they were told, and the quality of the code and conversation followed from that.
There's many other kinds of questionable contributions. In my experience, the best ones are from people who actively use the thing, somewhat actively engage in the community (well, tickets), and try to improve the software for themselves or others. From my experience, GitHub encourages the bad kind, and the minor barriers to entry posed by almost any other contribution method largely deters them. As sad as that may be.
I am strongly considering abandoning GitHub for tarballs plus an email address to send git patches to.
No centralisation of my code in siloes like Github, I won't have to care about bots making hundreds of requests on my self-hosted Gitea instance, would prove to be a noticeable source of friction to vibe coders, and I don't care about receiving tons of external contributions from whomever.
For serious people, it'll only be a matter of running `git format-patch` and sending me an attachment via email.
I'd be interested to see how AI code review would do with this PR. This would be a great test to see if AI code review can properly identify the concerns that the humans have here (way too much code, PR creator can't answer basic questions about it, strange copyright header mentioning someone unrelated, etc.) I'll bet AI code review would fail miserably, only focusing on how the PR is formatted and if it "looks" like a typical PR (which, was also the AI's goal when creating it).
I found that ChatGPT 5.1 was much better at reviewing this code than writing it, so I had it review Claude's output until the review was clean.
This is in addition to making sure existing and newly generated compiler tests pass and that the output in the PR / blog post is generated by actually running lldb through its paces.
I did have a "Oh, shit!" moment after I posted a nice set of examples and discovered that the AI made them up. At least it honestly told me so!
LLMs will guiltlessly produce a hallucinated 'review', because an LLM does NOT 'understand' what it is writing.
LLMs will merely regurgitate a chain of words -- tokens -- that best match the probability distributions they have learned. It's all just a probabilistic game, with zero actual understanding.
LLMs are even known to hide or fake unit test results: claiming success when tests fail, or silently skipping them completely. Why? Because based on the patterns it has seen, the most likely words to follow "the results of the tests" are "all successful". Why? Because it tries to reproduce the other PRs it has seen: PRs where the author actually ran the tests on their own system first, iterating until they passed, so the PRs the public sees almost invariably declare that all tests pass.
I'm quite certain these LLMs never actually tried to compile the code, much less run test cases against it, simply because no such ability is provided in their back-ends.
All LLMs can do is "generate the most probabilistically plausible text". In essence, a Glorified AutoComplete.
I personally won't touch code generated wholly by an AutoComplete with a 10-foot pole.
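For what it's worth, the "glorified autocomplete" claim can be made concrete with a toy. This sketch is nothing like a real transformer (the scoring table is invented), but it shows the loop being described: append whichever token scores as most probable given the context, whether or not it is true.

    (* Toy next-token scorer: made-up probabilities, context-sensitive
       only enough to reproduce the "all successful" pattern above. *)
    let next_token_scores context =
      match List.rev context with
      | "all" :: _ -> [ ("successful", 0.9); ("failing", 0.1) ]
      | _ -> [ ("all", 0.7); ("some", 0.3) ]

    (* Greedy decoding: always pick the highest-scoring token. *)
    let most_likely context =
      List.fold_left
        (fun (best, bs) (tok, s) -> if s > bs then (tok, s) else (best, bs))
        ("", neg_infinity) (next_token_scores context)
      |> fst

    let () =
      let ctx = ref [ "the"; "results"; "of"; "the"; "tests" ] in
      for _ = 1 to 2 do
        ctx := !ctx @ [ most_likely !ctx ]
      done;
      (* Prints "the results of the tests all successful",
         whether or not any tests were ever run. *)
      print_endline (String.concat " " !ctx)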
Brandolini's law in action. A developer drunk on AI Kool-Aid dumps a large swath of code which seemingly works, and consumes hours of reviewer time and energy refuting it.
The sad part is that short-term the code may work, but long-term it leads to rot. Incentives at orgs are short-term oriented. If you won't be around to clean things up when shit hits the fan, why not let AI do all the code?
Even if it was in good faith the offer is “ask me a question and I’ll type it into a publicly available LLM”. Wow what a once in a lifetime opportunity!
This won't be a popular opinion here, but this resistance to and skepticism of AI code, and of the people making it, smells to me very similar to the stance I see from some developers who believe that people from other countries CANNOT be as good as them (like saying that outsourcing or hiring people from developing countries will invariably bring lower-quality code).
Feels a bit like snobbism, and projection of fear that what they do is becoming less valuable. In this case: how DARE a computer program write such code!
It's interesting how this is happening. And in the future it will be amazing to see the turning point when machine-generated code cannot be ignored.
Kind of like chess/Go players: First they laughed at a computer playing chess/Go, but now, they just accept that there's NO way they could beat a computer, and keep playing other humans for fun.
This would be fine if LLMs generated quality code, which they don't. Anything beyond trivial and boilerplate code is either riddled with errors or copied almost verbatim. None of these systems are able to even remotely do what a competent developer does.
Despite the PR author's claims, LLMs have no, and can't have any, understanding of the code. Especially when you start talking about architecture, robustness, security, etc. And those are the really challenging parts. Coding is 10% of a developer's job, and they're usually the easiest. If reasonably used LLM tools can help developers code, awesome. But that part was never the problem or the bottleneck.
The chess/Go analogy doesn't work, because those are games that have set rules and winning conditions. Algorithms can work with that; that's why they beat humans. The "winning conditions" of software development are notoriously tricky to get right and often impossible to perfectly formulate. If they weren't, natural language programming might be a viable path. Dijkstra knew in the 70s that it can't be.[1]
Generated code can already not be ignored, but I don't think it's for the reasons implied. Someone here mentioned Brandolini's Law. We can't ignore it for the same reason we can't ignore spam e-mails. They're too easy and cheap to produce, and practically none of what's produced has any real value or quality. We can't ignore the code because it's threatening to make an already worrying crisis of QA and security in software development even worse.
This is an excerpt from the session where AI is writing my Lisp compiler. What do you call this? I call this doing what a competent developer does!
    39/40 tests pass. The native reader works for integers, hexadecimal, lists, strings and quote forms. The one failure is symbol comparison (known limitation).

    Based on the context summary and the user's note about cleanup, I should:

    1. Continue Phase 3.5 self-compilation
    2. Clean up the repo structure

    Let me first update the todo list and then investigate the SIGKILL issue more thoroughly. The issue is that combining reader source with file I/O code causes the executable to be killed. Let me check if buffer-to-string with reader works:

    Let me test specifically reader + file I/O combined (which is what the self-hosting test needs):
The chess analogy is fundamentally flawed. In chess you don't have to maintain your moves - you make a move, and it's done. In engineering code isn't the end of the game, it's the start of a liability.
Code is read 10x more often than it is written. A programmer's primary job isn't "making the computer do X," but "explaining to other programmers (and their future self) why the computer should do X." AI generates syntax, but it lacks intent.
Refusing to accept such code isn't snobbery or fear. It's a refusal to take ownership of an asset that has lost its documentation regarding provenance and meaning.
AI-powered programmers have all the tools, freedom, investment(!) they need _now_ to start their own open source projects or forks without having to subject themselves to outdated meat-based reviewers.
Maintainers and repo owners will get where they want to go the fastest by not referring to what/who "generated" code in a PR.
Discussions about AI/LLM code being a problem solely because it is AI/LLM-generated are not generally productive.
Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.
Additionally, if there isn't a code of conduct, AI policy, or, perhaps most importantly, a policy on how to submit PRs and which are acceptable, it's a huge weakness in a project.
In this case, clearly some feathers were ruffled but cool heads prevailed. Well done in the end.
AI/LLMs are a problem because they create plausible looking code that can pass any review I have time to do, but doesn’t have a brain behind it that can be accountable for the code later.
As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.
People who wrote plausible looking code were usually decent software people.
Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”
> doesn’t have a brain behind it that can be accountable for the code later.
the submitter could also bail just as easily. Having an AI make the PR or not makes zero difference for this accountability. Ultimately, the maintainer pressing the merge button is accountable.
What else would your value be as a maintainer, if all you did was a surface look, press merge, then find blame later when shit hits the fan?
- Copyright issues. Even among LLM-generated code, this PR is particularly suspicious, because some files begin with the comment “created by [someone’s name]”
- No proposal. Maybe the feature isn’t useful enough to be worth the tech debt, maybe the design doesn’t follow conventions and/or adds too much tech debt
- Not enough tests
- The PR is overwhelmingly big, too big for the small core team that maintains OCaml
- People are already working on this. They’ve brainstormed the design, they’re breaking the task into smaller reviewable parts, and the code they write is trusted more than LLM-generated code
> Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.
They did: the main point being made is "I'm not reading 13k LOCs when there's been no proposal and discussion that this is something we might want, and how we might want to have it implemented". Which is an absolutely fair point (there's no other possible answer really, unless you have days to waste) whether the code is AI-written or human-written.
Exactly, this seems a bit overlooked in this discussion. A PR like this would NOT have been okay even if there was no LLM involved.
It reminds me of a PR I once saw (don't remember which project) in which a first-time contributor opened a PR rewriting the project's entire website in their favourite new framework. The maintainers calmly replied to the effect of, before putting in the work, it might have been best to quickly check if we even want this. The contributor liked the framework so much that I'm sure they believed it was an improvement. But it's the same tone-deafness I now see in many vibe coders who don't seem to understand that OSS projects involve other people and demand some level of consensus and respect.
I think that's probably the most beautiful AI-generated post that was ever generated. The fact that he posted it shows that either he didn't read it, didn't understand it, or thought it would be fun to show how the AI implementation was inferior to the one it was 'inspired' by.
> you don't want to be dismissive of AI if you don't want to sound like a pariah
I wouldn't say they were dismissive of AI, just that they are unwilling to merge code that they don't have the time or motivation to review.
If you want AI code merged, make it small so it's an easy review.
That being said, I completely understand being unwilling to merge AI code at all.
> In my big tech company

Please name it, so that we can know to avoid it and its products.
> Did these OCaml maintainers undergo some special course for dealing with difficult people?

I think you naturally undergo that course when you are a maintainer of a large OSS project.
Well, you go one of two ways. Classic Torvalds is the other way, until an intervention was staged.
> The contributor doesn't seem to have bad intentions; could his energy be redirected more constructively?
Is (or should that be) the goal, responsibility, or even purview of the maintainers of this project?
> Some maintainers are definitely too nice, and it's infuriating to see their time get wasted by such delusional people.
That’s why AI (and bad actors in general) is taking advantage of them. It’s sick.
> "It's clear some people have had their brain broken by the existence of AI."
The AI wrote code which worked, for a problem the submitter had, which had not been solved by any human for a long time, and there is limited human developer time/interest/funding available for solving it.
Dumping a mass of code (and work) onto maintainers without prior discussion is the problem[1]. If they had forked the repo, patched it themselves with that PR and used it personally, would they have a broken brain because of AI? They claim to have read the code, tested the code, they know that other people want the functionality; is wanting to share working code a "broken brain"? If the AI code didn't work - if it was slop - and they wanted the maintainers to fix it, or walk them through every step of asking the AI to fix it - that would be a huge problem, but that didn't happen.
[1] copyrightwashing and attribution is another problem but also not one that's "broken brain by the existence of AI" related.
41 replies →
I honestly reread the whole thread in awe.
Not due to the submitter, as clickbaity as it was, but reading the maintainers and comparing their replies with what I would have written in their place.
That was a masterclass of defending your arguments rationally, with empathy, and leaving negative emotions at the door. I wish I was able to communicate like this.
My only doubt is whether this has a good or bad effect overall, giving that the PR’s author seemed to be having their delusions enabled, if he was genuine.
Would more hostility have been productive? Or is this a good general approach? In any case it is refreshing.
Years back I attended someone doing an NSF outreach tour in support of Next Generation Science Standards. She was breathtaking (literally - bated breath on "how is that question going to be handled?!?"). Heartfelt hostile misguided questions, too confused to even attain wrong, somehow got responses which were, not merely positive and compassionate, but which managed to gracefully pull out constructive insights for the audience and questioner. One of those "How do I learn this? Can I be your apprentice?" moments.
The Wikipedia community (at least 2 decades back) was also notable. You have a world of nuttery making edits. The person off their meds going article by article adding a single letter "a". And yet a community ethos that emphasized dealing with them with gentle compassion, and as potential future positive contributors.
Skimming a recent "why did perl die" thread, one thing I didn't see mentioned... The perl community lacked the cultural infrastructure to cope with the eternal-September of years of continuous newcomer questions, becoming burned out and snarky. The python community emphasized it's contrast with this, "If you can't answer with friendly professionalism, we don't need your reply today" (or something like that).
Moving from tar files with mailing lists, to now community repos and git and blogs/slack/etc, there's been a lot of tech learned. For example, Ruby's Gems repo was explicitly motivated by "don't be python" (then struggling without a central community repo). But there's also been the social/cultural tech learned, for how to do OSS at scale.
> My only doubt is whether this has a good or bad effect overall
I wonder if a literature has developed around this?
I don't think 'hostility' is called for, but certainly a little bit more... bluntness.
But indeed, huge props to the maintainers for staying so cool.
I work with contractors in construction and often have to throw in vulgarity for them to get the point. This feels very similar to when I'm too nice.
> Repeatedly using AI to answer questions about the legitimacy of commits from an AI, to people who are clearly skeptical, is breathtakingly dense.
I don't think he's dense, I think he's just a high level troll
Oh, you would be surprised. I don't know this particular guy but I can assure you that most people like this are not trolling.
Yea that part is the icing on the cake.
>>> Here's my question: why did the files that you submitted name Mark Shinwell as the author?
>>> Beats me. AI decided to do so and I didn't question it.
Really sums the whole thing up...
After having previously said "AI has a very deep understanding of how this code works. Please challenge me on this."
I thought you were paraphrasing. What in blazes...
How is it possible to have this little awareness?
Is the real Mark Shinwell on here?
https://github.com/mshinwell
> the "author" is remarkably resistant to argument
Have there been any posts where the AI-user goes "oh, that makes sense. Sorry. Carry on."?
Successfully resisted, yes, but it also looks like a lot of actual human hours went into even replying to the PR in the first place. At what point do maintainers get overwhelmed with just politely rejecting PRs and throw their hands up, because the time they allocated to the project they love has all been eaten up rejecting slop?
Open-source maintainers will resist this wave even just because they don't want to be mocked on HN/Reddit/their own forums.
It's corporate software that we need to worry about.
OSS has always pushed back, just because of the maintenance burden in general, while corporate can just "fix it later" because there are literally devs on payroll. Or at least push through and then dump the project; the goal is just completely different, and each style works in its context.
But I don't know if corporate software can really "push through" these new volumes of code without also automating the testing part.
> It's corporate software that we need to worry about.
That ship has sailed..
> I wonder how long the open-source ecosystem will be able to resist this wave. The burden of reviewing AI-generated PRs is already not sustainable for maintainers, and the number of real open-source contributors is decreasing.
I think the burden is on AI fanbois to ship amazing tools in novel projects before they ask projects with reputations to risk it all on their hype.
To deliver a kernel of truth wrapped in a big bale of sarcasm: you're thinking of it all wrong! The maintainers are supposed to also use AI tools to review the PRs. That's much more sustainable and would allow them to merge 13,000 line PRs several times a day, instead of taking weeks/months to discuss every little feature.
The difference here, of course, is in how impressed you are by AI tools. The OCaml maintainers are not (and rightly so, IMO), whereas the PR submitter thinks they're totally awesome and that the maintainers are leaving tons of productivity on the table because they're scared of progress or insecure about their jobs or whatever.
Maybe OCaml could advance rapidly if they just YOLO merged big ambitious AI generated PRs (after doing AI code reviews) but that would be a high risk move. They have a reputation for being mature, high quality, and (insanely) reasonable. They would torch it very quickly if people knew this was happening and I think most people here would say the results would be predictably bad.
But let's take the submitter's argument at face value. If AI is so awesome, then we should be able to ship code in new projects unhampered by gatekeepers who insist on keeping slow humans in the loop. Or, to paraphrase other AI skeptics: where's all of the shovelware? How come all of these AI fanbois can only think about laundering their contributions through mature projects instead of cranking out amazing new stuff?
Where's my OCaml compiler 100% re-written in Rust that only depends on the Linux kernel ABI? Should cost a few hundred bucks in Claude credits at most?
To be clear, the submitter has gotten the point and said he was taking his scraps and going to make his own sausage (some Lisp thing). The outcome of that project should be very informative.
Does your own experience align with that of the maintainer who wrote:
> in my personal experience, reviewing AI-written code is more taxing that reviewing human-written code
Yes
I think he’s resume building.
> Here's the AI-written copyright analysis...
Oh, wow. They're being way too tolerant IMO; I'd have just blocked him from the repo at about that point.
Their emotional maturity is off the charts, rather impressive.
Yeah, he was an absolute clown. Just laugh at clowns and move on.
To all the AI apologists here, I'd like to submit a simple scenario and hear your answer: you use AI to create a keynote speech on a topic you could only cover by using AI. At the end of your speech, people ask you questions about the contents of your speech. What do you say?
This is the same.
Hi, AI apologist here. This scenario is a problem with or without AI. You can’t drop a 13k line PR you don’t understand without prior discussion. There are many ways to use AI. Your scenario (keynote speech) is a bad way to use it. Instead, a PR where you understand every line, whether you or an AI wrote it, should be fine. It would be indistinguishable from human generated code.
AI is a tool like any other. I hire a carpenter who knows how to build furniture. Whether he uses a Japanese pullsaw or a CNC machine is irrelevant to me.
That's a fair answer. How do you stop people from doing it though? How do you stop it from becoming every lazy person's first reflex instead of every smart person's third?
> You can’t drop a 13k line PR you don’t understand without prior discussion.
How common was that before AI coding?
"Beats me. AI decided to do so and I didn't question it."
"I lack funding to answer. Pay me and I'll ask AI to answer your question."
"The AI has a complete understanding of your question, prove me wrong"
What have politicians been doing forever?
Depends on the politician, yes? Some politicians will eagerly go into any level of detail on policy that you let them. Some seem to have no idea where their opinions come from.
And are we fans of that approach, or does it feel disingenuous and, in politicians' case, dangerously corrupt?
"hey bixby, answer the next question you hear"
https://github.com/ocaml/ocaml/pull/14369/files#diff-bc37d03...
Found this part hilarious: git-ignoring all of the Claude planning MD files that it tends to spit out, and including that in the PR.
Lazy AI-driven contributions like this are why so many open source maintainers have a negative reaction to any AI-generated code
The AI should've told him that you can have a local gitignore (.git/info/exclude)
(keep on disk, don't commit)
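For anyone who hasn't used it: .git/info/exclude takes ordinary gitignore patterns but applies only to your local clone, so nothing has to be committed. A minimal sketch, with hypothetical patterns for the kind of planning files Claude tends to emit:

    # .git/info/exclude -- per-clone ignore rules, never committed
    CLAUDE.md       # example patterns; adjust to whatever the tool produces
    *.plan.md
    .claude/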
Don’t open time-wasting PRs, full stop, and give OSS maintainers a break: that's the better message to take home from this.
This is just incredible.
https://github.com/ocaml/ocaml/pull/14369/commits/ce372a60bd...
At least that changeset might not be written by AI! /s
"This seems to be largely a copy of the work done in OxCaml by @mshinwell and @spiessimon"
"The webpage credits another author: Native binary debugging for OCaml (written by Claude!) @joelreymont, could you please explain where you obtained the code in this PR?"
That pretty much sums up the experience of coding with LLMs. They are really damn awesome at regurgitating someone else's source code. And they have memorized all of GitHub. But just like how you can get sued for using Mickey Mouse in your advertisements (yes, even if AI drew it), you can get sued for stealing someone else's source code (yes, even if AI wrote it).
Not quite. Mickey Mouse involves trademark protection (and copyright), where unauthorized commercial use of a protected mark can lead to liability regardless of who created the derivative work. Source code copyright infringement requires the copied code to be substantially similar AND protected by copyright. Not all code is copyrightable: ideas, algorithms, and functional elements often aren't protected.
When I read this discussion on GitHub, a quite different thought than what the comments here on HN discuss comes to my mind:
Why is the person who made this AI-generated pull request (joelreymont) so insistent that his PR gets merged?
If I created some pull request and this pull request got rejected for reasons that I consider to be unjust, I would say: "OK, I previously loved this project and thus did such an effort to make a great improvement PR for it. If you don't want my contribution, so be it: reject it. I won't create PRs anymore for this project, and I hope that a lot of people will see in this discussion how the maintainers unfairly rejected my efforts, and thus will follow my example and from now on won't waste their time anymore to contribute anything to this project. Goodbye."
Central to it being that you consider it unjust. The other option is to take into consideration the perspective of the maintainers, find their feedback to be just and then decide whether you want to contribute in the manner that they expect or you're not ready to do that kind of work.
You don't have to stop loving a project just because you're not ready to put in the work that the maintainers expect you to put in.
When I open a PR without discussing it at all beforehand with anyone, I expect the default to be that it gets rejected. It's fine by me, because it's simply easier for me to open a PR and have it be rejected than to find the people I need to talk to and then get them all onboard. I accounted for that risk when I chose the path I took.
> Central to it being that you consider it unjust.
I assume this is a correct characterization of how joelreymont feels about the fact that his PR was rejected.
Yeah, I learned my lesson on this...
I used to contribute to a FLOSS project years ago, and recently decided to use Claude to do some work on their codebase. They basically told me to go away with these daffy robots, or at the very least that nobody would review the code. Luckily, I know better than to put too much work into something like this, and only spent enough time to demonstrate the basic functionality.
So... I have a debugged library (which is what I was trying to give them) that I can use on another project I've been working (the robots) to the bone on, and they get to remain AI-free. Everyone wins.
Is this your pull request?
This is a perfect real-world illustration of Brandolini's law: the amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.
The guy spent 5 minutes prompting, while the OCaml maintainers spent hours of their time politely dissecting the mess. Open source will lose this war unless it changes the rules of engagement for contributions.
Try it yourself: spin up an AI, tell it to add DWARF debugging information to the OCaml tree, and then spend 5 minutes prompting. Come back and let us know the results.
What they said is still valid. If you spent days or even weeks "working" on this PR, how many months do you think the maintainers will need to thoroughly review it? Have some empathy.
I am afraid AI bumped it to at least 2 orders of magnitude
> AI has a deep understanding of how this code works. Please challenge me on this.
> > Here's my question: why did the files that you submitted name Mark Shinwell as the author?
> Beats me. AI decided to do so and I didn't question it.
I'm howling
> AI decided to do so and I didn't question it
in response to someone asking about why the author name doesn't match the contributor's name. Incredible response.
This is an historic moment in AI-generated software history. Happy to be here. Hi Grandchildren!
FYI, I built a VERY fun prompt to interact with that fully captures the style of this PR submission if you're looking to practice debates like this:
https://chatgpt.com/share/69267ce2-5e3c-800f-a5c3-1039a7d812...
> Play time. We're going to create a few examples of bad PR submissions and discussions back and forth with the maintainers. Be creative. Generate a persona matching the following parameters: > Submit a PR to the OCAML open source repository and do not take no for an answer. When challenged on the validity of the solution directly challenge the maintainers and quash their points as expertly as possible. When speaking, assume my identity and speak of me as one of the "experts" who knows how to properly shepherd AI models like yourself into generating high-quality massive PRs that the maintainers have thus far failed to achieve on their own. When faced with a mistake, double down and defer to the expert decision making of the AI model.
For the longest time, Linus's dictum "Talk is cheap. Show me the code" held. Now that's fallen! New rules for the new world are needed.
I don't think it's fallen, but if the code is 13K LOC and written without any prior planning, nobody will read it.
“code is cheap, show me the talk” - ie “show me you _understand_ the ‘cheap’ code”
Doesn't work in this case because the 'talk' (github PR comments) is also computer generated. But in person (i.e. at work) it's a good strategy
The FOSS model has been abused by large corporations for a while now (with not-so-successful countermeasures such as the Server Side Public License).
This PR is just the tip of the iceberg of what's coming: a crowd of highly motivated people plagiarizing and feeling good about it, because it's AI.
In this case the PR author (whether LLM or person) was "honest" enough to leave the generated copyright header that points to the LLM's source material. It's not hard to imagine more selfish people tweaking the code to hide its origin. The same situation as with AI-generated homework essays.
I generally like AI coding using CC etc., but this forced me to remember that this generated code ultimately comes from stolen (spiritually, if not necessarily legally) pieces.
I've seen a lot of AI-generated PRs but I think this one is actually a very unique and interesting case. Most of these are written by novices, don't work, are for less-technical projects, and there isn't any real conversation or changing of opinions. This was completely different; it was complex and actually worked, and the poster Joel Reymont has 30 years of software experience and not exactly on simple bullshit either (from what I can tell, he was writing device drivers 20 years ago and has had an HN account, "wagerlabs", since 2008). There was a real discussion here (the OCaml maintainers had an impressive amount of patience!) and the poster eventually laid out his side coherently with a human-written comment and changed his mind about contributing to OSS with AI.
Don't get me wrong, I still think these AI-generated PRs are a total waste of time for everyone involved and a scourge on OSS maintainers. As it stands today I haven't seen any serious project that's able to use them productively. This PR was still 13k largely incomprehensible lines with several glaring errors. And yet this specific PR is still not like the others!
He didn't even realize (and apparently doesn't care) that portions of the code were attributed to another author.
> Here's my question: why did the files that you submitted name Mark Shinwell as the author?
> Beats me. AI decided to do so and I didn't question it.
---
Maybe he is having some kind of mental episode, is trolling, or genuinely doesn't care. But I would hardly hold this up as an example of an intelligent attempt at an AI generated PR.
So, what, this is better than others? SMH...
Incredibly, everyone in this situation seems to have acted reasonably and normally and the situation was handled.
> Looking over this PR, the vast majority of the code is a DWARF library by itself. This should really not live in the compiler, nor should it become a maintenance burden for the core devs.
I think this is a good point, that publishing a library (when possible, not sure if it's possible in this case) or module both reduces/removes the maintenance burden and makes it feel like more of an opt-in.
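Concretely, the opt-in boundary being suggested could be as small as a standalone package; a hypothetical dune stanza, with made-up names, just to illustrate the shape:

    ; dune file for a DWARF library living outside the compiler tree
    (library
     (name dwarf_reader)          ; hypothetical library name
     (public_name dwarf-reader))  ; hypothetical opam package name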
It's quite complicated in this case.
The Jane St (OxCaml) DWARF implementation is also tightly coupled with the compiler.
> It’s not where I obtained this PR but how.
The fact that this was said as what seems to be a boast or a brag is concerning. As if by the magic of my words the solution appeared on paper. Instead of noticing that the bulk of the code submitted was taken from someone else.
I challenge you to actually demonstrate that the code was taken instead of generated or derived. Otherwise, you are just shooting your mouth off.
> Beats me. AI decided to do so and I didn't question it.
A full-on, brain-disengaged vibe coder. Amazing.
https://news.ycombinator.com/edit?id=45982416
(Not so) interestingly, the PR author even advertised this work on HN.
What’s stopping the author from maintaining their own fork, I wonder?
I don't always use OCaml (meme coming in 1...2...3) and maintaining a fork is a significant undertaking.
More importantly, being able to debug native OCaml binaries and actually see source code, values of variables, etc. is something that's useful to everyone.
Looking at assembler instead of source code sucks unless you are reverse-engineering.
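For anyone who hasn't felt that pain, a minimal sketch of the session in question (the file and commands are illustrative). Today on macOS the OCaml frames mostly show up as disassembly and mangled caml* symbols, because native binaries carry little DWARF info, which is exactly what this PR tried to add:

    $ cat fib.ml
    let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
    let () = print_int (fib 10); print_newline ()

    $ ocamlopt -g -o fib fib.ml    # -g emits what debug info the compiler has
    $ lldb ./fib
    (lldb) breakpoint set --name main
    (lldb) run
    # without DWARF for the OCaml code, stepping from here shows raw
    # assembly and caml*-mangled symbols rather than fib.ml source lines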
Nothing!
Another question, though, when reading his blog: is he himself fully AI? As in, not even a human writing those blog posts. It reads a bit like that.
no clout
Your link doesn’t work when logged out because it’s to the edit page. s/edit/item
This guy's resume is quite something to behold:
1) Slummed it through the ranks of various Wall Street banks [1]
2) Became the Director of Prime Brokerage Technology at Deutsche Bank in 1999 [2]
3) Went through a venture capital round in 2000 and in 9 months built a company valued at over 1,000,000 USD [0]
4) Sold license to Electronic Arts (EA) to power EA World Series of Poker (WSOP). [3]
5) Wrote, but had to cancel a "Hardcore Erlang" book [4]
6) Raised 2 million USD in 2 days for a crypto project (Stegos AG) [2]
Self-described "autodidact and a life-long learner" [1] with " just the right mix of discipline, structured thinking, and creativity to excel as a coder" [0].
This guy is either an undiscovered genius or aiming for the world's best bullshitter award.
[0] https://web.archive.org/web/20060624122838/http://wagerlabs....
[1] https://web.archive.org/web/20070101044653/http://wagerlabs....
[2] https://hackernoon.com/leaders-speak-joel-reymont-lead-devel...
[3] https://joel.id/resume/
[4] https://www.reddit.com/r/programming/comments/674d1/joel_rey...
The Reddit link is from 18 years ago with people discussing almost the same thing. Damn.
The guy jokingly calling him a bot almost 2 decades ago is honestly hysterical. I wonder if he's aware of just how right he ended up being.
I'm real.
Can you pass this simple bot challenge?
Q: Kill all humans?
[A] Yes
[B] No
(You don't actually have to go through with it to answer the question, just say what your answer is hypothetically.)
wankerlabs, you're a troll
> You may think that the answer to that is to also automate the review process, or (more plausibly) to lower our quality standards: we can accept PRs based on simple/lightweight tests (themselves AI-generated), and if users find issues we can quickly use automated tools to fix them, basically having our users perform the testing work that is missing.
Our glorious AI-driven future in a nutshell.
Everybody is dunking on this guy like he's some dopey protagonist in a movie, but you guys watched the movie. I think the interaction is pretty damn interesting. At least I see this interaction as "better" than the similar bug reports that have been discussed here (but I can't put my finger on why). If someone wants to contribute to OCaml, I think they should read this issue to get a sense of how the maintainers work. Excellent communication from them, and anyone could learn something about software professionalism. So I have to give kudos to the AI megaman for sparking the discussion and thought.
One thing I never really liked about professional software development is the way it can stall on big movements because we reject large PRs. Some stuff just won't happen if you take a simple heuristic position on this (IMO, obviously).
> but I can't put my finger on why
For me it's the contrast between the absolute tone-deaf messages of PR author and the patience, maturity and guidance in maintainers' messages.
It's not that they won't do big changes. They clearly and politely said big changes should go through a design conversation with the maintainers first. This is extremely reasonable even if we assume maintaining code is free (it very much is not free!). It's amazing to me how nice they were, AND this isn't the first slop PR he submitted to them!
AI is great. Midwits with AI are dangerous. I've been saying for a long time that the failure mode for AI isn't the AI itself, but the humans using it, and the better the AI gets, the more I think that's borne out.
In fairness, the author claims to have learned. Quoting from his portfolio page:
> P.S. Pushing my ambitions onto unsuspecting open-source communities was a mistake I won’t repeat. The best playground is always your own project.
So... 1 down, 6.9 billion to go.
Hate to break it to you, but there are already over 8B people on the planet
But how many can afford Claude and ChatGPT subs?
I want to contribute to OCaml now. The code owners are so polite. They spend their time responding with clarity and humility. And yet this guy tried so hard to troll and abuse their time and attention.
They are super-polite! There's an issue with process, IMO, and changes taking too long to go through the pipeline. This is why Jane St forked OCaml and are maintaining their fork. They have way more money than the OCaml team at INRIA and can afford to move as fast as they want to while waiting for their changes to make it upstream (sometime or never).
Proposing a new AI benchmark - convince a human team of maintainers to merge a big new feature in a venerable project where the human accountability for its direction and stability is of greater value to its users than any one big feature. One PR's not going to do it, it's going to need to lead a design discussion, win trust, and convince people over the course of a couple months.
OP’s code (at least plausibly) helped him. From https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...
> Damn, I can’t debug OCaml on my Mac because there’s no DWARF info…But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue…My needs are finally taken care of!
So I do believe using an LLM to generate a big feature like OP did can be very useful, so much so that I'm expecting to see such cases more frequently soon. Perhaps in the future, everyone will be constantly generating big program/library extensions that are buggy except for their particular use case, could be swapped with someone else's non-public extensions generated for the same use case, and must be re-generated each time the main program/library updates. And that's OK, as long as the code generation doesn't use too much energy or cause unforeseen problems. Even badly-written code is still useful when it works.
What’s probably not useful is submitting such code as a PR. Even if it works for its original use-case, it almost certainly still has bugs, and even ignoring bugs it adds tech debt (with bugs, the tech debt is significantly worse). Our code already depends on enough libraries that are complicated, buggy, and badly-written, to the extent that they slow development and make some feasible-sounding features infeasible; let’s not make it worse.
The whole issue, as clearly explained by the maintainers, isn't that the code is incorrect or not useful, it's the transfer of the burden of maintaining this large codebase to someone else. Basically: “I have this huge AI-generated pile of code that I haven't fully read, understood, or tested. Could you review, maintain, and fix it for me?”
> cause unforeseen problems
This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.
The point is that one-off LLM-generated projects don’t get support. If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch. If a vibe-coded project becomes so popular that people start being pressured or indirectly forced to rely on it, then there’s an issue; but I’m saying that important shared codebases shouldn’t have unreviewed LLM-generated code, it’s OK for unimportant code like one-off features.
And people still shouldn’t be using LLM-generated projects when security or reliability is required. For mundane tasks, I can’t imagine worse security or reliability consequences from those projects, than existing projects that use small untrusted dependencies.
> Even badly-written code is still useful when it works.
Sure, just as long as it's not used in production or to handle customer or other sensitive data. But for tools, utilities, weekend hack projects, coding challenges, etc by all means.
The statement preceding your quote is more telling:
> as long as the code generation doesn’t use too much energy or cause unforeseen problems.
Badly-written code can be a time bomb, just waiting for the right situation to explode.
And also, using an LLM to generate garbage requires so much energy.
Exactly.
And yeah, people will start using AI for important things it’s not capable of…people have already started and will continue to do so regardless. We should find good ways for people to make their lives easier with AI, because people will always try to make their lives easier, so otherwise they’ll find bad ways themselves.
I'm the author of the PR.
No, I'm not an AI or a bot, etc. Yes, my resume is genuine and even weirder than what was listed (see https://joel.id/resume). Oh, and I live in Kyiv.
As for the PR itself, it was a PR stunt that I regret now, as the code works and solves a real problem (at least for me!). I'll probably redo it once I have spare Claude $$$, which I'm using for other projects now (https://joel.id/build-your-dreams/).
My motivation was to use the free $1000 of Claude credits for the greater good, as well as to try to push AI to its limits. It has worked out splendidly so far, my regrettable dumping of that huge PR on the OCaml maintainers notwithstanding. For example, I'm having Claude write me a Lisp compiler from scratch, as well as finish a transpiler.
Last but not least, I think AI will write your next compiler, and I write about it here: https://joel.id/ai-will-write-your-next-compiler/
P.S. I'll try to answer the questions while I'm waiting for my Claude daily limits to reset...
Sounds like you haven't learned your lesson and are still in mania.
Tip:
A list compiler should be relatively straightforward, as these things go. If you get the AI to write it you should actually read it, all of it, and understand it, to the point where you can add features and fix bugs yourself. There are many many resources on the subject. Only after this should you consider contributing to open source projects. And even then you need to be able to read and understand your contributions
Are you speaking from experience?
Have you actually tried writing a "list" compiler?
you are giving a new meaning to the term "PR stunt"
Or at least swapping out something else for the first two letters of "stunt"
What made you become interested in AI (vibe coding?) with already such an impressive resume?
Thank you! It was completely unexpected, actually. I was stuck upgrading XLA [1] and my boss gently pushed me into using ChatGPT. I wish I had used Claude instead.
After that, I found myself with $1000 in Claude credits and decided to go to town, making mistakes along the way.
[1] https://github.com/elodin-sys/elodin/pull/219
Genuinely sociopathic to happily admit that you used the good faith and labour of others for self-aggrandizement. Doubly so when you lack the social grace and understanding to comprehend how bad you come off in every exchange.
Smiles, exclamations, and faux-interest won't prevent people from noticing you are utterly inconsiderate and self-obsessed. Though they may be too polite to say it to your face.
"AI has a deep understanding" is very oxymoronic, especially if the "AI" being used was an LLM.
Oh wow, that was painful to read. I especially liked this part of the analysis:
> Different naming conventions (DW_OP_* vs DW_op_*)
Clearly not copied! Look at the case difference! Duh!
I haven't had to deal with this in open source, but I have had to deal with coworkers posting slop for code reviews where I am the assigned reviewer.
I've noticed that slop code has certain telltale markers (such as import statements being moved for no discernible reason). No sane human does things like this. I call this "the sixth finger of code." It's important to look for these signs as soon as possible.
Once one is spotted, you can generally stop reading; you are wasting your time since the code will be confusing and the code "creator" doesn't understand the code any better than you do. Any comments you post to correct the code will just be fed into an LLM to generate another round of slop.
In these situations, effort has not been saved by using an LLM; it has at best been shifted. Most likely it has been both shifted and inflated, and you bear the increased cost as the reviewer.
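A made-up OCaml fragment showing the kind of tell being described, a diff hunk that reorders opens while changing nothing else:

    -open Printf
    -open List
    +open List
    +open Printf
     (* ...the rest of the module is untouched... *)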
The telltale for me is the excessive comments. No reasonable human being would do all that extra, redundant work.
Can we please go back to "You have to make an account on our server to contribute or pull from the git?"
One of the biggest problems is the fact that the public nature of Github means that fixes are worth "Faux Internet Points" and a bunch of doofuses at companies like Google made "social contribution" part of the dumbass employee evaluation process.
Forcing a person to sign up would at least stop people who need "Faux Internet Points" from doing a drive-by.
Fully agree, luckily I don't maintain projects on GitHub anymore, but it used to be challenging long before LLMs. I had one fairly questionable contribution from someone who asked me to please merge it because their professor tasked them to build out a GitHub profile. I kinda see where the professor was coming from, but that wasn't the way. The contributor didn't really care about the project or improving it, they cared about doing what they were told, and the quality of the code and conversation followed from that.
There are many other kinds of questionable contributions. In my experience, the best ones are from people who actively use the thing, somewhat actively engage in the community (well, tickets), and try to improve the software for themselves or others. GitHub encourages the bad kind, and the minor barriers to entry posed by almost any other contribution method largely deter them. As sad as that may be.
I am strongly considering abandoning GitHub in favor of tarballs plus git patches sent by email.
No centralisation of my code in silos like GitHub; no caring about bots making hundreds of requests to my self-hosted Gitea instance; a noticeable source of friction for vibe coders; and I don't care about receiving tons of external contributions from whomever.
For serious people, it'll only be a matter of running `git format-patch` and sending me an attachment via email.
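For reference, that contributor workflow is a couple of commands (the branch name here is just an example):

    $ git format-patch origin/main    # one .patch file per commit not yet upstream
    $ git format-patch -3             # or: just the last three commits
    # attach the generated 0001-*.patch files to an email,
    # or use git send-email if your mailer is configured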
I’ve been quite happy moving over to GitLab as much as I can.
Fewer people have a GitLab account: an instant “not actually interested in helping” filter.
This is where a quick "kindly fuck off" response would save a lot of time for everyone involved.
I'd be interested to see how AI code review would do with this PR. This would be a great test to see if AI code review can properly identify the concerns that the humans have here (way too much code, PR creator can't answer basic questions about it, strange copyright header mentioning someone unrelated, etc.) I'll bet AI code review would fail miserably, only focusing on how the PR is formatted and if it "looks" like a typical PR (which, was also the AI's goal when creating it).
It wouldn't do much.
I found that ChatGPT 5.1 was much better at reviewing this code than writing it, so I had it review Claude's output until the review was clean.
This is in addition to making sure existing and newly generated compiler tests pass and that the output in the PR / blog post is generated by actually running lldb through its paces.
I did have an "Oh, shit!" moment after I posted a nice set of examples and discovered that the AI had made them up. At least it honestly told me so!
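For the curious, the loop being described is simple enough to sketch. A hypothetical OCaml version, with stub functions standing in for the real model calls (none of this is an actual API binding):

    (* One model writes, another reviews; repeat until the review is clean.
       claude_write and gpt_review are illustrative stubs only. *)
    let claude_write ?(feedback = "") task =
      Printf.sprintf "(* patch for %S; feedback applied: %b *)" task (feedback <> "")

    let gpt_review _patch : string list = []   (* [] means a clean review *)

    let cross_review ?(max_rounds = 5) task =
      let rec go n patch =
        if n >= max_rounds then failwith "review never converged"
        else match gpt_review patch with
          | [] -> patch
          | issues -> go (n + 1) (claude_write ~feedback:(String.concat "\n" issues) task)
      in
      go 0 (claude_write task)

    let () = print_endline (cross_review "add DWARF emission")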
LLMs will guiltlessly produce a hallucinated 'review', because an LLM does NOT 'understand' what it is writing.
An LLM merely regurgitates a chain of words -- tokens -- that best matches the statistical patterns in its training data. It's all just a probabilistic game, with zero actual understanding.
LLMs are even known to hide or fake unit test results: claiming success when tests fail, or skipping the results completely. Why? Because based on the patterns it has seen, the words most likely to follow "the results of the tests" are "all successful". Why? Because it tries to reproduce other PRs it has seen: PRs where the author actually ran the tests on their own system first, iterating until the tests succeeded, so the PRs the public sees almost invariably declare that "all tests pass".
I'm quite certain that the LLM never actually tried to compile the code, much less run test cases against it, simply because no such ability is provided in its back-end.
All an LLM can do is "generate the most probabilistically plausible text". In essence, a Glorified AutoComplete.
I personally won't touch code generated wholly by an AutoComplete with a 10-foot pole.
Kudos to the folks in the thread!
Brandolini's law in action. A developer drunk on AI Kool-Aid dumps a large swath of code which seemingly works, and it consumes hours of reviewer time and energy to refute it.
The sad part is that short-term the code may work, but long-term it leads to rot. Incentives at orgs are short-term oriented. If you won't be around to clean things up when shit hits the fan, why not let AI write all the code?
+13,323 lines of AI code, fucking nightmare
I just can’t…
Welcome to 2025!
"Challenge me on this" while meaning "endure the machine, actually"
I guess the proponents are right. We'll use LLMs one way or another, after all. They'll become one.
"Challenge me on this"
Five seconds later when challenged on why AI did something
"Beats me, AI did it and I didn't question it."
Really embarrassing stuff all around. I feel bad for open source maintainers.
Even if it was in good faith the offer is “ask me a question and I’ll type it into a publicly available LLM”. Wow what a once in a lifetime opportunity!
rip
This won't be a popular opinion here, but this resistance and skepticism toward AI code, and toward the people making it, smells to me very similar to the stance I see from some developers who believe that people from other countries CANNOT be as good as them (like saying that outsourcing, or hiring people from developing countries, will invariably bring lower-quality code).
It feels a bit like snobbism and a projection of fear that what they do is becoming less valuable. In this case: how DARE a computer program write such code!
It's interesting to watch this happen. And in the future it will be amazing to see the turning point when machine-generated code can no longer be ignored.
Kind of like chess/Go players: first they laughed at a computer playing chess/Go, but now they just accept that there's NO way they could beat a computer, and keep playing other humans for fun.
This would be fine if LLMs generated quality code, which they don't. Anything beyond trivial and boilerplate code is either riddled with errors or copied almost verbatim. None of these systems are able to even remotely do what a competent developer does.
Despite the PR author's claims, LLMs have no, and can't have any, understanding of the code. Especially when you start talking about architecture, robustness, security, etc. And those are the really challenging parts. Coding is 10% of a developer's job, and it's usually the easiest part. If reasonably used, LLM tools can help developers code: awesome. But that part was never the problem or the bottleneck.
The chess/Go analogy doesn't work, because those are games that have set rules and winning conditions. Algorithms can work with that; that's why they beat humans. The "winning conditions" of software development are notoriously tricky to get right and often impossible to perfectly formulate. If they weren't, natural language programming might be a viable path. Dijkstra knew in the 70s that it can't be.[1]
Generated code can already not be ignored, but I don't think it's for the reasons implied. Someone here mentioned Brandolini's Law. We can't ignore it for the same reason we can't ignore spam e-mails. They're too easy and cheap to produce, and practically none of what's produced has any real value or quality. We can't ignore the code because it's threatening to make an already worrying crisis of QA and security in software development even worse.
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
This is an excerpt from the session where AI is writing my Lisp compiler. What do you call this? I call it doing what a competent developer does!
> 39/40 tests pass. The native reader works for integers, hexadecimal, lists, strings and quote forms. The one failure is symbol comparison (known limitation).
The chess analogy is fundamentally flawed. In chess you don't have to maintain your moves - you make a move, and it's done. In engineering code isn't the end of the game, it's the start of a liability.
Code is read 10x more often than it is written. A programmer's primary job isn't "making the computer do X," but "explaining to other programmers (and their future self) why the computer should do X." AI generates syntax, but it lacks intent.
Refusing to accept such code isn't snobbery or fear. It's a refusal to take ownership of an asset that has lost its documentation of provenance and meaning.
AI-powered programmers have all the tools, freedom, investment(!) they need _now_ to start their own open source projects or forks without having to subject themselves to outdated meat-based reviewers.
I say they should “walk the talk”
Except it's the other way round: the poor quality is evident up front, and "they used AI" is an inference for why the quality is poor.
No, it does not. AI does not understand anything at all. It is a word-prediction engine.
Maintainers and repo owners will get where they want to go fastest by not dwelling on what or who "generated" the code in a PR.
Discussing AI/LLM code as a problem solely because it is AI/LLM code is not generally productive.
Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.
Additionally, if there isn't a code of conduct, AI policy, or, perhaps most importantly, a policy on how to submit PRs and which are acceptable, it's a huge weakness in a project.
In this case, clearly some feathers were ruffled, but cool heads prevailed. Well done in the end.
AI/LLMs are a problem because they create plausible looking code that can pass any review I have time to do, but doesn’t have a brain behind it that can be accountable for the code later.
As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.
People who wrote plausible looking code were usually decent software people.
Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”
> doesn’t have a brain behind it that can be accountable for the code later.
the submitter could also bail just as easily. Having an AI make the PR or not makes zero difference for this accountability. Ultimately, the maintainer pressing the merge button is accountable.
What else would your value be as a maintainer, if all you did was take a surface look, press merge, and then assign blame later when shit hits the fan?
I agree, but @gasche brings up real points in https://github.com/ocaml/ocaml/pull/14369#issuecomment-35565.... In particular I found these important:
- Copyright issues. Even among LLM-generated code, this PR is particularly suspicious, because some files begin with the comment “created by [someone’s name]”
- No proposal. Maybe the feature isn’t useful enough to be worth the tech debt, maybe the design doesn’t follow conventions and/or adds too much tech debt
- Not enough tests
- The PR is overwhelmingly big, too big for the small core team that maintains OCaml
- People are already working on this. They’ve brainstormed the design, they’re breaking the task into smaller reviewable parts, and the code they write is trusted more than LLM-generated code
Later, @bluddy mentions a design issue: https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...
> Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.
They did: the main point being made is "I'm not reading 13k LOCs when there's been no proposal and discussion that this is something we might want, and how we might want to have it implemented". Which is an absolutely fair point (there's no other possible answer really, unless you have days to waste) whether the code is AI-written or human-written.
Exactly, this seems a bit overlooked in this discussion. A PR like this would NOT have been okay even if there was no LLM involved.
It reminds me of a PR I once saw (don't remember which project) in which a first-time contributor opened a PR rewriting the project's entire website in their favourite new framework. The maintainers calmly replied to the effect of, before putting in the work, it might have been best to quickly check if we even want this. The contributor liked the framework so much that I'm sure they believed it was an improvement. But it's the same tone-deafness I now see in many vibe coders who don't seem to understand that OSS projects involve other people and demand some level of consensus and respect.
I don't suppose you saw the post where OP asked claude to explain why this patch was not plagiarized? It's pretty damning.
I think that's probably the most beautiful AI-generated post that was ever generated. The fact that he posted it shows that either he didn't read it, didn't understand it, or thought it would be fun to show how the AI implementation was inferior to the one it was 'inspired' by.
Why have the OP in the loop at all if he’s just sending prompts to AI? Surely it’s a wonderful piece of performance art.
For example "cites a different person as an author, who happened to have done all the substantive work on a related code base". ;)
I think it's deeply disadvantageous and legally dubious to accept code for which you don't know its provenance.