Comment by Aeolun
12 hours ago
> We disavow AI because people like Brenda are perfect and the machine is error-prone.
No, no. We disavow AI because our great leaders inexplicably trust it more than Brenda.
I don't understand why generative AI gets a pass for constantly being wrong, but an average worker would be fired if they performed the same way. If a manager needed to constantly correct you or double-check your work, you'd be out. Why are we lowering the bar for generative AI?
Multiple reasons:
* Gen AI never disagrees with or objects to boss's ideas, even if they are bad or harmful to the company or others. In fact, it always praises them no matter what. Brenda, being a well-intentioned human being, might object to bad or immoral ideas to prevent harm. Since boss's ego is too fragile to accept criticism, he prefers gen AI.
* Boss is usually not qualified, willing, or free to do Brenda's job to the same quality standard as Brenda. This compels him to pay Brenda and treat her with basic decency, which is a nuisance. Gen AI does not demand fair or decent treatment and (at least for now) is cheaper than Brenda. It can work at any time and under conditions Brenda refuses to. So boss prefers gen AI.
* Brenda takes accountability for and pride in her work, making sure it is of high quality and as free of errors as she can manage. This is wasteful: boss only needs output that is good enough to make it someone else's problem, and as fast as possible. This is exactly what gen AI gives him, so boss prefers gen AI.
My kneejerk reaction is the sunk cost fallacy (AI is expensive), but I'm pretty sure it's actually because businesses have spent the last couple of decades doing absolutely everything they can to automate as many humans out of the workforce as possible.
If a worker could be right 50% of the time, get paid 1 cent to write a 5000-word essay on a random topic, and do it in less than 30 seconds, then I think managers would be fine hiring that worker at that rate as well.
5000 half-right words is worthless output. That can even lead to negative productivity.
1 reply →
Great, now who are you paying to sort the right output from the wrong output?
There's a variety of reasons.
You don't have a human to manage. The relationship is completely one-sided; you can query a generative AI at 3 in the morning on New Year's Eve. This entity has no emotions to manage and no interests of its own.
There's cost.
There's an implicit promise of improvement over time.
There's the domain of expertise being inhumanly wide. You can ask about cookies right now, then about twelfth-century France, then about biochemistry.
The fact that an average worker would be fired for performing the same way is what the human actually competes on. They have accountability, which is not something AI can offer. If it were the case that, say, Anthropic actually signed contracts stating they are liable for any mistakes, then humans would be absolutely toast.
It’s much cheaper than Brenda (superficially, at least). I’m not sure a worker that costs a few dollars a day would be fired, especially given the occasional brilliance they exhibit.
I've been trying to open my mind and "give AI a chance" lately. I spent all day yesterday struggling with Claude Code's utter incompetence. It behaves worse than any junior engineer I've ever worked with:
- It says it's done when its code does not even work, sometimes when it does not even compile.
- When asked to fix a bug, it confidently declares victory without actually having fixed the bug.
- It gets into this mode where, when it doesn't know what to do, it just tries random things over and over, each time confidently telling me "Perfect! I found the error!" and then waiting for the inevitable response from me: "No, you didn't. Revert that change".
- Only when you give it explicit, detailed commands like "modify fade_output to be -90" will it actually produce decent results, but by the time I get to that level of detail, I might as well be writing the code myself.
To top it off, unlike the junior engineer, Claude never learns from its mistakes. It makes the same ones over and over and over, even if you include "don't make XYZ mistake" in the prompt. If I were an eng manager, Claude would be on a PIP.
Recently I've used Claude Code to build a couple TUIs that I've wanted for a long time but couldn't justify the time investment to write myself.
My experience is that I think of a new feature I want, I take a minute or so to explain it to Claude, press enter, and go off and do something else. When I come back in a few minutes, the desired feature has been implemented correctly with reasonable design choices. I'm not saying this happens most of the time, I'm saying it happens every time. Claude makes mistakes but corrects them before coming to rest. (Often my taste will differ from Claude's slightly, so I'll ask for some tweaks, but that's it.)
The takeaway I'm suggesting is that not everyone has the same experience when it comes to getting useful results from Claude. Presumably it depends on what you're asking for, how you ask, the size of the codebase, how the context is structured, etc.
1 reply →
Learning to use Claude Code (and similar coding agents) effectively takes quite a lot of work.
Did you have it creating and running automated tests as it worked?
1 reply →
> - It says it's done when its code does not even work, sometimes when it does not even compile.
> - When asked to fix a bug, it confidently declares victory without actually having fixed the bug.
You need to give it ways to validate its work. A junior dev will also hand you code that doesn't compile, or that was supposed to fix a bug but doesn't, if they never actually compile the code and test that the bug is truly fixed.
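To make that concrete, here's a minimal sketch in Python of the kind of pass/fail gate you can tell the agent to run after every change, so it can't declare victory until the code compiles and the tests pass. It assumes a project with a src/ directory and a pytest suite; the paths and commands are illustrative, not anything Claude Code requires.

```python
# Minimal sketch of a verification gate an agent can run after each change.
# Assumes a src/ layout and an installed pytest suite; purely illustrative.
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run a command and report whether it exited successfully."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"FAILED: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
    return result.returncode == 0

if __name__ == "__main__":
    ok = (
        # Does the code at least parse/compile?
        run([sys.executable, "-m", "compileall", "-q", "src"])
        # Does the test suite (including a regression test for the bug) pass?
        and run([sys.executable, "-m", "pytest", "-q"])
    )
    sys.exit(0 if ok else 1)
```

If the agent has to get an exit code of 0 out of something like this before reporting back, "it compiles and the bug is fixed" stops being a matter of its own confidence.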
1 reply →
yOu'Re HoLdInG iT wRoNg
How much does the compute cost for the AI to do Brenda's job? Not the total AI spend, but the fraction that replaced Brenda. That's why they'd fire a human but keep using the AI.
Brenda has been kissed on her forehead by the Excel goddess herself. She is irreplaceable.
(More seriously, she also has 20+ years of institutional knowledge about how the company works, none of which has ever been captured anywhere else.)
It's not just compute, it's also the setup costs: how much did you have to pay someone to feed the AI Brenda's decades of company-specific knowledge and all the little special cases of how the business actually works?
Because it doesn’t have to be as accurate as a human to be a helpful tool.
That is precisely why we have humans in the loop for so many AI applications.
If [AI + human reviewer to correct it] is some multiple more efficient than [human alone], there is still plenty of value.
> Because it doesn’t have to be as accurate as a human to be a helpful tool.
I disagree. If something can't be as accurate as a (good) human, then it's useless to me. I'll just ask the human instead, because I know that the human is going to be worth listening to.
6 replies →
Because it's much cheaper.
So now you don't have to pay people to do their actual work; you assign the work to ML ("AI") and then pay the people to check what it generated. That's a very different task, menial and boring, but if it produces more value for the same amount of input money, then it's economical to do so.
And since checking the output is often a lower skilled job, you can even pay the people less, pocketing more as an owner.
It’s not even greater trust. It’s just passive trust. The thing is, Brenda is her own QA department. Every good Brenda is precisely good because she checks her own work before shipping it. AI does not do this. It doesn’t even fully understand the problem/question sometimes, yet it provides a smart, definitive-sounding answer. It’s like the doctor on The Simpsons: if you can’t tell he’s a quack, you’d probably follow his medical advice.
> Every good Brenda is precisely good because she checks her own work before shipping it. AI does not do this.
A confident statement that's trivial to disprove. I use Claude Code to build and deploy services on my NAS. I can ask it to spin up a new container on my subdomain and make it available internally only, or also externally. It knows it has access to my Cloudflare API key. It knows I am running rootless podman and my file storage convention.
It will create the DNS records for a cloudflared tunnel, or just set up DNS on my Pi-hole for internal-only resolution. It will check that podman launched the container and then make an HTTP request to the site to verify that it is up. It will reach for network tools to test both the public and private interfaces. It will check the podman logs for any errors or warnings. If it detects errors, it will attempt to resolve them, and it is typically successful for the types of services I'm hosting.
Instructions like "Setup Jellyfin in a container on the NAS and integrate it with the rest of the *arr stack. I'd like it to be available internally and externally on watch.<domain>.com" have worked extremely well for me. It reliably delivers working, integrated services and checks that what it deployed is actually up, all without my explicit prompting.
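For anyone curious, here's a rough sketch in Python of the verification loop I'm describing; the container name, URL, and log patterns are placeholders rather than my real setup, and in practice Claude just runs the equivalent podman and curl commands itself.

```python
# Rough sketch of the post-deploy checks: is the container up, are the logs
# clean, and does the site answer over HTTP? Assumes rootless podman; the
# container name and URL below are placeholders.
import subprocess
import urllib.request

def container_running(name: str) -> bool:
    """Ask podman whether the named container reports an 'Up' status."""
    out = subprocess.run(
        ["podman", "ps", "--filter", f"name={name}", "--format", "{{.Status}}"],
        capture_output=True, text=True,
    ).stdout.strip()
    return out.startswith("Up")

def logs_clean(name: str) -> bool:
    """Scan the most recent container logs for obvious errors or warnings."""
    logs = subprocess.run(
        ["podman", "logs", "--tail", "200", name],
        capture_output=True, text=True,
    )
    combined = (logs.stdout + logs.stderr).lower()
    return "error" not in combined and "warning" not in combined

def http_up(url: str) -> bool:
    """Probe the service; treat any successful HTTP response as 'up'."""
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except Exception:
        return False

if __name__ == "__main__":
    name = "jellyfin"                  # placeholder container name
    url = "https://watch.example.com"  # placeholder for watch.<domain>.com
    checks = {
        "container running": container_running(name),
        "logs clean": logs_clean(name),
        "http reachable": http_up(url),
    }
    for label, ok in checks.items():
        print(f"{label}: {'OK' if ok else 'FAIL'}")
```

The point isn't this particular script; it's that the agent closes the loop itself instead of stopping at "I deployed it."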
Brenda + AI > Brenda
That’s definitely the hype. But I don’t know if I agree. I’m essentially a Brenda in my corporate finance job and so far have struggled to find any useful scenarios to use AI for.
For once, I thought, this could build me a Gantt chart, because that’s an annoying task in Excel. I had the data. When I asked it to help me: “I can’t do that, but I can summarize your data.” Not helpful.
Any type of analysis is exactly what I don’t want to trust it with. But I could use help actually building things, which it wouldn’t do.
Also, Brenda’s are usually fast. Having them use a tool like AI that can’t be fully trusted just slows them down. So IMO, we haven’t proven the AI variable in your equation is actually a positive value.
2 replies →
But execs aren't talking about that, they are talking about firing Brenda, or replacing her with a junior version.
Yes but:
1 reply →
> No, no. We disavow AI because our great leaders inexplicably trust it more than Brenda.
I would add a little nuance here.
I know a lot of people who lack technical ability, either because they advanced out of hands-on work or because it was never their job or interest.
These types of people are usually the folks who set direction or govern the purse strings.
Here's the thing: they are empowered by AI. They can do things themselves.
And every one of them is so happy. They are tickled pink.
They want to trust it, because then they can stop paying Brenda, save a few dollars, and buy a 3rd yacht.
“Let’s deploy something as error-prone as Brad, or more so, at infinite scale across our organisation.”