
Comment by dawnbreez

4 days ago

Throwing my two cents in here...I think there's a disconnect between what AI advocates want, and what everyone else wants.

The arguments against genAI tend to point out things like:

1. Its output is unreliable at best
2. That output often looks correct to an untrained eye and requires expert intervention to catch serious mistakes
3. The process automates away a task that many people rely on for income

And the response from genAI advocates tends to be dismissive...and I suspect it is, in part, because that last point is a positive for many advocates of genAI. Nobody wants to say it out loud, but when someone on Reddit or similar claims that even a 10% success rate outweighs the 90% failure rate, what they mean is most likely "A machine that works 10% of the time is better than a programmer who works 60-80% of the time because the machine is more than 6-to-8-times cheaper than the programmer".

There's also the classic line about how automation tends to create more jobs in the future than it destroys now, which itself is a source of big disconnects between pro-genAI and anti-genAI crowds--because it ignores a glaring issue: Just because there's gonna be more jobs in the future, doesn't mean I can pay rent with no job tomorrow!

"You can write an effective coding agent in a week" doesn't reassure people because it doesn't address their concerns. You can't persuade someone that genAI isn't a problem by arguing that you can easily deploy it, because part of the concern is that you can easily deploy it. Also, "you’re not doing what the AI boosters are doing" is flat-out incorrect, at least if you're looking at the same AI boosters I am--most of the people I've seen who claim to be using generated code say they're doing it with Claude, which--to my knowledge--is just an LLM, albeit a particularly advanced one. I won't pretend this is anything but anecdata, but I do engage with people who aren't in the "genAI is evil" camp, and...they use Claude for their programming assistance.

"LLMs can write a large fraction of all the tedious code you’ll ever need to write" further reinforces this disconnect. This is exactly why people think this tech is a problem.

The entire section on "But you have no idea what the code is!" falls apart the moment you consider real-world cases, such as [CVE-2025-4143](https://nvd.nist.gov/vuln/detail/cve-2025-4143), where a self-described expert programmer working with Claude--one who emphasizes that he went over the results with a fine-toothed comb, precisely to validate his own skepticism about genAI!--missed a fundamental OAuth mistake that has been common knowledge for a long while. The author is correct that reading other people's code is part of the job...but that's hard enough when the thing that wrote the code can be asked about its methods, and despite advances in giving LLMs a sort of train of thought, the fact remains that LLMs are designed to output things that "look truth-y," not things that are logically consistent. (Ah, but we're not talking about LLMs, even though kentonv tells us that he just used an LLM. We're talking about agentic systems. No true AI booster would "just" use an LLM...)
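
(For anyone not familiar with the bug class: if I'm reading the CVE write-up right, it comes down to the long-standing OAuth rule that the authorization endpoint must match the requested redirect_uri against the client's registered URIs exactly. Here's a rough sketch of what that check looks like, with made-up names--this is not the actual workers-oauth-provider code or its fix, just the general shape of the rule:)

```typescript
// Generic illustration only (hypothetical names, not workers-oauth-provider's API).
// RFC 6749 and the OAuth security best-practice guidance require the authorization
// endpoint to compare the requested redirect_uri against the client's registered
// URIs with an exact string match; prefix matching, or skipping the check entirely,
// lets an attacker have the authorization code delivered to a host they control.
interface ClientRegistration {
  clientId: string;
  redirectUris: string[]; // exact redirect URIs registered for this client
}

function validateRedirectUri(client: ClientRegistration, requested: string): string {
  const match = client.redirectUris.find((uri) => uri === requested);
  if (!match) {
    throw new Error(`redirect_uri not registered for client ${client.clientId}`);
  }
  return match;
}
```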

I actually agree with the point about how the language can catch and point out some of the errors caused by hallucination, but...I can generate bad function signatures just fine on my own, thank you! :P In all seriousness, this addresses basically nothing about the actual point. The problem with hallucination in a setting like this isn't "the AI comes up with a function that doesn't exist"--that's what I'm doing when I write code. The problem is that sometimes the function that doesn't exist is my RSA implementation, and the AI 'helpfully' writes an RSA implementation for me. That is a thing you should never fucking do: cryptography is incredibly complex, easy to fuck up, and hard to audit, and you really ought to just use a library...a thing you [also shouldn't leave up to your AI.](https://www.theregister.com/2025/04/12/ai_code_suggestions_s...) You can't fix that with a language feature, aside from having a really good cryptography library built into the language itself, and as much as I'd love to have a library for literally everything I might want to do in a language...that's not really feasible.
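
(To make the "just use a library" bit concrete--and this is only a sketch, not a recommendation for any particular threat model--most runtimes these days do ship a vetted primitive, e.g. Web Crypto in JS land, so there's no excuse for a hand-rolled or LLM-generated RSA:)

```typescript
// Sketch only: lean on the runtime's built-in Web Crypto (SubtleCrypto) for RSA
// key generation instead of anything hand-rolled or LLM-generated.
async function makeKeyPair(): Promise<CryptoKeyPair> {
  return crypto.subtle.generateKey(
    {
      name: "RSA-OAEP",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    false, // keys are not extractable
    ["encrypt", "decrypt"]
  );
}
```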

"Does an intern cost $20/month? Because that’s what Cursor.ai costs," says the blog author, as if that's supposed to reassure me. I'm an intern. My primary job responsibility is getting better at programming so I can help with the more advanced things my employer is working on (for the record, these thoughts are my own and not those of my employer). It does not make me happy to know that Cursor.ai can replace me. This also doesn't address the problem that, frankly, large corporations aren't going to replace junior developers with these tools; they're going to replace senior developers, because senior developers cost more. Does a senior engineer cost 20 dollars a month? Because that's what Cursor.ai costs!

...and the claim that open source is just as responsible for taking jobs is baffling. "We used to pay good money for databases" is not an epic own, it is a whole other fucking problem. The people working on FOSS software are in fact very frustrated with the way large corporations use their tools without donating so much as a single red cent! This is a serious problem! You know that XKCD about the whole internet being held up by a project maintained by a single person in his free time? That's what you're complaining about! And that guy would love to be paid to write code that someone can actually fucking audit, but nobody will pay him for it, and instead of recognizing that the guy ought to be supported, you argue that this is proof that nobody else deserves to be supported. I'm trying to steelman this blogpost, I really am, but dude, you fundamentally have this point backwards.

I hope this helps others understand why this blogpost doesn't actually address any of my concerns, or the concerns of other people I know. That's kind of the best I can hope for here.

> 1. Its output is unreliable at best

> 2. That output often looks correct to an untrained eye and requires expert intervention to catch serious mistakes

The thing is, this is true of humans too.

I review a lot of human code. I could easily imagine a junior engineer creating CVE-2025-4143. I've seen worse.

Would that bug have happened if I had written the code myself? Not sure, I'd like to think "no", but the point is moot anyway: I would not have personally been the one to write that code by hand. It likely would have gone to someone more junior on the team, and I would have reviewed their code, and I might have forgotten to check for this all the same.

In short, whether it's humans or AI writing the code, it was my job to have reviewed the code carefully, and unfortunately I missed here. That's really entirely on me. (It's particularly frustrating for me as this particular bug was on my list of things to check for and somehow I didn't.)

> 3. The process automates away a task that many people rely on for income

At Cloudflare, at least, we always have 10x more stuff we want to work on than we have engineers to work on it. The number of engineers we can hire is basically dictated by revenue. If each engineer is more productive, though, then we can ship features faster, which hopefully leads to revenue growing faster. Which means we hire more engineers.

I realize this is not going to be true everywhere, but in my particular case, I'm confident saying that my use of AI did not cause any loss of income for human engineers, and likely actually increased it.

  • I mean, fair. It's true that humans aren't that great at writing code that can't be exploited, and the blogpost makes this point too: between a junior engineer's output and an LLM's output, the LLM does the same thing for cheaper.

    I would argue that a junior engineer has a more valuable feature--the ability to ask that junior engineer questions after the fact, and ideally the ability to learn and eventually become a senior engineer--but if you're looking at just the cost of a junior engineer doing junior engineer things...yeah, no, the LLM does it more efficiently. If you assume that the goal is to write code cheaper, LLMs win.

    However, I'd like to point out--again--that this isn't going to be used to replace junior engineers, it's going to be used to replace senior engineers. Senior engineers cost more than junior engineers; if you want each engineer to be more productive per-dollar (and assume, like many shareholders do, that software engineers are fungible) then the smart thing to do is replace the more costly engineer. After all, the whole point of AI is to be smart enough to automate things, right?

    You and I understand that a senior engineer's job is very different from a junior engineer's job, but a stockholder doesn't--because a stockholder only needs to know how finance works to be a successful stockholder. Furthermore, the stockholder's goal is simply to make as much money as possible per quarter--partly because he can just walk out if the company starts going under, often with a bigger "severance package" than any of the engineers in the company. The incentives are lined up not only for the stockholder to not know why getting rid of senior engineers is a bad idea, but to not care. Were I in your position, I would be worried about losing my job, not because I didn't catch the issue, but because the sales pitch is that the machine can replace me.

    Aside: Honestly, I don't really blame you for getting caught out by that bug. I'm by no means an expert on anything to do with OAuth, but it looks like the kind of thing that's a nightmare to catch, because it's misbehavior under the kind of conditions that--well, only show up when a request is maliciously crafted. If it wasn't something that's been known about since the RFC, it would probably have taken a lot longer for someone to find it.

    • Luckily, shareholders do not decide who to hire and fire. The actual officers of the company, hopefully, understand why senior engineers are non-fungible. Tech companies, at least, seem to understand this well. (I do think a lot of non-tech companies that nevertheless have software in their business get this wrong, and that's why we see a lot of terrible software out there. E.g. most car companies.)

      As for junior engineers, despite the comparisons in coding skill level, I don't think most people are suggesting that AI should replace junior engineers. There are a lot of things humans do which AI still can't, such as seeing the bigger picture that the code is meant to implement, and also, as you note, learning over time. An LLM's consciousness ends with its context window.
